Search results for: phonological processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3784

634 Application of Groundwater Level Data Mining in Aquifer Identification

Authors: Liang Cheng Chang, Wei Ju Huang, You Cheng Chen

Abstract:

Investigation and research are key to the conjunctive use of surface water and groundwater resources. The hydrogeological structure is an important basis for groundwater analysis and simulation. Traditionally, the hydrogeological structure is determined manually from geological drill logs, well structures, groundwater levels, and so on. In Taiwan, a groundwater observation network has been built, and a large amount of groundwater-level observation data is available. The groundwater level is the state variable of the groundwater system, reflecting the combined response of the hydrogeological structure, groundwater injection, and extraction. This study applies analytical tools to the observation database to develop a methodology for identifying confined and unconfined aquifers. These tools include frequency analysis, cross-correlation analysis between rainfall and groundwater level, groundwater recession curve analysis, and a decision tree. The developed methodology is then applied to groundwater layer identification in two groundwater systems: the Zhuoshui River alluvial fan and the Pingtung Plain. The frequency analysis applies the Fourier transform to time-series groundwater-level observations to quantify the daily-frequency amplitude of the groundwater level caused by artificial groundwater extraction. The cross-correlation analysis between rainfall and groundwater level is used to obtain the groundwater replenishment time between infiltration and the peak groundwater level during wet seasons. The groundwater recession curve, the average rate of groundwater recession, is used to analyze the internal flux in the groundwater system and the flux caused by artificial behaviors. The decision tree combines the information obtained from these analytical tools to produce the best estimate of the hydrogeological structure.
The developed method achieves a training accuracy of 92.31% and a verification accuracy of 93.75% on the Zhuoshui River alluvial fan, and a training accuracy of 95.55% and a verification accuracy of 100% on the Pingtung Plain. These results indicate that the developed methodology is an effective tool for identifying hydrogeological structures.
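The cross-correlation step described above can be sketched in a few lines: slide the groundwater-level series against the rainfall series and take the lag with the highest correlation as the replenishment time. The synthetic daily series, the 60-day lag window, and the function name are illustrative assumptions, not the study's data or code.

```python
import numpy as np

def recharge_lag(rainfall, gw_level, max_lag=60):
    """Lag (in days) at which the groundwater level correlates best with rainfall."""
    corrs = [np.corrcoef(rainfall[:len(rainfall) - lag], gw_level[lag:])[0, 1]
             for lag in range(max_lag + 1)]
    return int(np.argmax(corrs))

# Synthetic daily series in which groundwater responds ~7 days after rainfall.
rng = np.random.default_rng(0)
rain = rng.gamma(2.0, 5.0, size=365)
gw = np.roll(rain, 7) + rng.normal(0.0, 0.5, size=365)
lag = recharge_lag(rain, gw)
```

With real observation wells, one would additionally detrend the series and restrict the analysis to wet seasons, as the abstract implies.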

Keywords: aquifer identification, decision tree, groundwater, Fourier transform

Procedia PDF Downloads 157
633 Investigating Visual Statistical Learning during Aging Using the Eye-Tracking Method

Authors: Zahra Kazemi Saleh, Bénédicte Poulin-Charronnat, Annie Vinter

Abstract:

This study examines the effects of aging on visual statistical learning, using eye-tracking techniques to investigate this cognitive phenomenon. Visual statistical learning is a fundamental brain function that enables the automatic and implicit recognition, processing, and internalization of environmental patterns over time. Previous research has suggested the robustness of this learning mechanism throughout the aging process, underscoring its importance in the context of education and rehabilitation for the elderly. The study included three distinct groups of participants: 21 young adults (Mage: 19.73), 20 young-old adults (Mage: 67.22), and 17 old-old adults (Mage: 79.34). Participants were exposed to a series of 12 arbitrary black shapes organized into 6 pairs, each with a different spatial configuration and orientation (horizontal, vertical, or oblique). These pairs were not explicitly revealed to the participants, who were instructed to passively observe 144 grids presented sequentially on the screen for a total duration of 7 minutes. In the subsequent test phase, participants performed a two-alternative forced-choice task in which they had to identify the more familiar pair in 48 trials, each consisting of a base pair and a non-base pair. Behavioral analysis using t-tests revealed notable findings. The mean score of the first group was significantly above chance, indicating the presence of visual statistical learning. Similarly, the second group also performed significantly above chance, confirming the persistence of visual statistical learning in young-old adults. Conversely, the third group, consisting of old-old adults, showed a mean score that was not significantly above chance. This lack of statistical learning in the old-old group suggests a decline in this cognitive ability with age. Preliminary eye-tracking results showed a decrease in the number and duration of fixations during the exposure phase in all groups.
The main difference was that older participants fixated empty cells more often than younger participants, likely reflecting a decline in the ability to ignore irrelevant information, which results in lower statistical learning performance.
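The above-chance comparison can be illustrated with a one-sample t-test against the 0.5 chance level of the two-alternative forced-choice task. The simulated accuracy scores below are hypothetical stand-ins for one group's data, not the study's measurements.

```python
import math
import random

def t_above_chance(scores, chance=0.5):
    """One-sample t statistic testing whether mean 2AFC accuracy exceeds chance."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)
    return mean, (mean - chance) / math.sqrt(var / n)

# Hypothetical proportions correct for 21 young adults (clipped to [0, 1]).
random.seed(1)
young = [min(1.0, max(0.0, random.gauss(0.62, 0.08))) for _ in range(21)]
mean_young, t_young = t_above_chance(young)
```

A t value well above the critical value for 20 degrees of freedom would indicate learning above chance, as reported for the first two groups.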

Keywords: aging, eye tracking, implicit learning, visual statistical learning

Procedia PDF Downloads 77
632 Effect of Roasting Temperature on the Proximate, Mineral and Antinutrient Content of Pigeon Pea (Cajanus cajan) Ready-to-Eat Snack

Authors: Olaide Ruth Aderibigbe, Oluwatoyin Oluwole

Abstract:

Pigeon pea is one of the minor leguminous crops; though underutilised, it is used traditionally by farmers to alleviate hunger and malnutrition. Pigeon pea is cultivated in Nigeria by subsistence farmers. It is rich in protein and minerals; however, its utilisation as food is common only among the poor and rural populace who cannot afford more expensive sources of protein. One of the factors contributing to its limited use is its high antinutrient content, which makes it indigestible, especially when eaten by children. The development of value-added products that reduce the antinutrient content and make the nutrients more bioavailable would increase the utilisation of the crop and contribute to the reduction of malnutrition. This research therefore determined the effects of different roasting temperatures (130 °C, 140 °C, and 150 °C) on the proximate, mineral, and antinutrient components of a pigeon pea snack. The brown variety of pigeon pea seeds was purchased from a local market (Otto) in Lagos, Nigeria. The seeds were cleaned, washed, and soaked in 50 ml of water containing sugar and salt (4:1) for 15 minutes, and thereafter roasted at 130 °C, 140 °C, and 150 °C in an electric oven for 10 minutes. Proximate, mineral, phytate, tannin, and alkaloid content analyses were carried out in triplicate following standard procedures. The results of the three replicates were pooled and expressed as mean ± standard deviation; a one-way analysis of variance (ANOVA) and the Least Significant Difference (LSD) test were carried out. The roasting temperatures significantly (P < 0.05) affected the protein, ash, fibre, and carbohydrate content of the snack. The ready-to-eat snack prepared by roasting at 150 °C had a significantly higher protein content (23.42 ± 0.47%) compared to those roasted at 130 °C and 140 °C (18.38 ± 1.25% and 20.63 ± 0.45%, respectively).
The same trend was observed for the ash content (3.91 ± 0.11% at 150 °C, 2.36 ± 0.15% at 140 °C, and 2.26 ± 0.25% at 130 °C), while the fibre and carbohydrate contents were highest at a roasting temperature of 130 °C. Iron, zinc, and calcium were not significantly (P < 0.05) affected by the different roasting temperatures. Antinutrients decreased with increasing temperature. Phytate levels were 0.02 ± 0.00, 0.06 ± 0.00, and 0.07 ± 0.00 mg/g; tannin levels were 0.50 ± 0.00, 0.57 ± 0.00, and 0.68 ± 0.00 mg/g; and alkaloid levels were 0.51 ± 0.01, 0.78 ± 0.01, and 0.82 ± 0.01 mg/g at 150 °C, 140 °C, and 130 °C, respectively. These results show that roasting at a high temperature (150 °C) can be utilised as a processing technique for increasing the protein and decreasing the antinutrient content of pigeon pea.
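The statistical treatment described, triplicate means compared by one-way ANOVA, can be sketched as follows. Only the group means echo the protein values reported above; the within-group spreads are illustrative.

```python
def one_way_anova(groups):
    """F statistic for a one-way ANOVA across treatment groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    means = [sum(g) / len(g) for g in groups]
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

protein = {  # % protein, illustrative triplicates per roasting temperature
    130: [18.4, 19.5, 17.2],
    140: [20.6, 20.2, 21.0],
    150: [23.4, 23.0, 23.9],
}
F = one_way_anova(list(protein.values()))
```

An F value exceeding the critical value for (2, 6) degrees of freedom would justify the post-hoc LSD comparisons reported in the abstract.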

Keywords: antinutrients, pigeon pea, protein, roasting, underutilised species

Procedia PDF Downloads 143
631 Using Optical Character Recognition to Manage the Unstructured Disaster Data into Smart Disaster Management System

Authors: Dong Seop Lee, Byung Sik Kim

Abstract:

In the 4th Industrial Revolution, various intelligent technologies have been developed in many fields. These artificial intelligence technologies are applied in various services, including disaster management. Disaster information management does not just support disaster work; it is also the foundation of smart disaster management, which draws on historical disaster information using artificial intelligence technology. Disaster information is one of the important elements of the entire disaster cycle. Disaster information management refers to the act of managing and processing electronic data about the disaster cycle, from occurrence to progress, response, and planning. However, information about status control, response, and recovery from natural and social disaster events is mainly managed in structured and unstructured report form, existing as handouts or hard copies. Such unstructured data are often lost or destroyed due to inefficient management, so it is necessary to manage unstructured data as disaster information. In this paper, an Optical Character Recognition (OCR) approach is used to convert handouts, hard copies, images, and reports, whether printed or generated by scanners, into electronic documents. The converted disaster data are then organized into a disaster code system as disaster information and stored in a disaster database system. Gathering and creating disaster information based on OCR for unstructured data is an important element of smart disaster management. In this paper, a character recognition rate of over 90% for Korean characters was achieved using an upgraded OCR; since the recognition rate depends on the fonts, sizes, and special symbols of the characters, it was improved through a machine learning algorithm.
The converted structured data are managed in a standardized disaster information form connected with the disaster code system, which allows the structured information to be stored and retrieved across the entire disaster cycle, covering historical disaster progress, damage, response, and recovery. The expected outcome of this research is that it can be applied to smart disaster management and decision making by combining artificial intelligence technologies with historical big data.
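Once OCR has produced electronic text, organizing it into a disaster code system can be as simple as keyword-driven routing. The code table, keywords, and sample report text below are entirely hypothetical; the actual Korean disaster code system is not specified in the abstract.

```python
import re

# Hypothetical mapping from disaster keywords to disaster codes.
DISASTER_CODES = {
    "flood": "ND-01",
    "typhoon": "ND-02",
    "earthquake": "ND-03",
    "fire": "SD-01",
}

def classify_report(ocr_text):
    """Return (code, keyword) for the first disaster keyword found in the
    OCR output, or ("UNKNOWN", None) when no keyword matches."""
    for keyword, code in DISASTER_CODES.items():
        if re.search(rf"\b{keyword}\b", ocr_text, re.IGNORECASE):
            return code, keyword
    return "UNKNOWN", None

code, kw = classify_report("Response report: Typhoon damage to coastal roads, 2019")
```

In practice, the abstract describes a machine-learning classifier rather than fixed keywords, but the routing of recognized text into coded records follows the same shape.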

Keywords: disaster information management, unstructured data, optical character recognition, machine learning

Procedia PDF Downloads 129
630 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach

Authors: Mpho Mokoatle, Darlington Mapiye, James Mashiyane, Stephanie Muller, Gciniwe Dlamini

Abstract:

Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications, such as gene annotation, expression studies, personalized treatment, and precision medicine. However, this rapid growth in sequence data poses a great challenge, which calls for novel data processing and analytic methods as well as substantial computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from the whole genome sequence data of a given bacterial isolate, and (iv) demonstrate computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes, and the discrimination becomes more concise as the size of the k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction.
The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay that exists among accuracy, computing resources, and explainability of classification results. Moreover, the analysis provides a new way to elucidate genetic information from genomic data and to identify phenotype relationships, which is important especially in explaining complex biological mechanisms.
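The k-mer representation at the heart of the approach can be sketched directly: slide a window of size k along the sequence and count the resulting substrings. The toy sequence below is illustrative, not an MTB genome.

```python
from collections import Counter

def kmer_profile(seq, k=10):
    """Counter of overlapping k-mers in a DNA sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

seq = "ATGCGATCGATGCGATCGAA"
profile = kmer_profile(seq, k=10)   # k = 10, the best-performing size above
n_kmers = sum(profile.values())     # a length-L sequence yields L - k + 1 k-mers
```

Each genome becomes a sparse count vector over the k-mer vocabulary; the 4^k growth of that vocabulary with k is exactly the computing-resource trade-off the abstract highlights.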

Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing

Procedia PDF Downloads 167
629 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach

Authors: Darlington Mapiye, Mpho Mokoatle, James Mashiyane, Stephanie Muller, Gciniwe Dlamini

Abstract:

Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications, such as gene annotation, expression studies, personalized treatment, and precision medicine. However, this rapid growth in sequence data poses a great challenge, which calls for novel data processing and analytic methods as well as substantial computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from the whole genome sequence data of a given bacterial isolate, and (iv) demonstrate computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes, and the discrimination becomes more concise as the size of the k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction.
The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay that exists among accuracy, computing resources, and explainability of classification results. Moreover, the analysis provides a new way to elucidate genetic information from genomic data and to identify phenotype relationships, which is important especially in explaining complex biological mechanisms.

Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing

Procedia PDF Downloads 159
628 Mesoporous Na2Ti3O7 Nanotube-Constructed Materials with Hierarchical Architecture: Synthesis and Properties

Authors: Neumoin Anton Ivanovich, Opra Denis Pavlovich

Abstract:

Materials based on titanium oxide compounds are widely used in such areas as solar energy, photocatalysis, the food industry and hygiene products, biomedical technologies, etc. Demand for them has also formed in the battery industry (an example is the commercialization of Li4Ti5O12), where much attention has recently been paid to the development of next-generation systems and technologies, such as sodium-ion batteries. This dictates the need to search for new materials with improved characteristics, as well as for ways of obtaining them that meet the requirements of scalability. One way to solve these problems is the creation of nanomaterials, which often have a complex of physicochemical properties radically different from those of their counterparts in the micro- or macroscopic state. At the same time, it is important to control the texture (specific surface area, porosity) of such materials. In view of the above, among other methods, the hydrothermal technique seems suitable, as it allows a wide range of control over the synthesis conditions. In the present study, a method was developed for the preparation of mesoporous nanostructured sodium trititanate (Na2Ti3O7) with a hierarchical architecture. The materials were synthesized by hydrothermal processing and exhibit a complex, hierarchically organized two-level architecture: at the first level of the hierarchy, the materials consist of particles with a rough surface; at the second level, of one-dimensional nanotubes. The products were found to have a high specific surface area and porosity with a narrow pore size distribution (about 6 nm). As is known, specific surface area and porosity are important characteristics of functional materials, which largely determine the possibilities and directions of their practical application. Electrochemical impedance spectroscopy data show that the resulting sodium trititanate has a sufficiently high electrical conductivity.
As expected, the synthesized, complexly organized nanoarchitecture based on porous sodium trititanate may find practical application, for example, in next-generation electrochemical storage and energy conversion devices.

Keywords: sodium trititanate, hierarchical materials, mesoporosity, nanotubes, hydrothermal synthesis

Procedia PDF Downloads 107
627 Prediction of Formation Pressure Using Artificial Intelligence Techniques

Authors: Abdulmalek Ahmed

Abstract:

Formation pressure is the main factor that affects the economics and efficiency of drilling operations. Knowing the pore pressure and the parameters that affect it helps reduce the cost of the drilling process. Many empirical models reported in the literature have been used to calculate the formation pressure based on different parameters. Some of these models used only drilling parameters to estimate pore pressure; others predicted the formation pressure based on log data. All of these models required different trends, such as normal or abnormal, to predict the pore pressure. Few researchers have applied artificial intelligence (AI) techniques to predict the formation pressure, and those used at most one or two AI methods. The objective of this research is to predict the pore pressure based on both drilling parameters and log data, namely weight on bit, rotary speed, rate of penetration, mud weight, bulk density, porosity, and delta sonic time. Real field data are used to predict the formation pressure using five different artificial intelligence (AI) methods: artificial neural networks (ANN), radial basis function (RBF), fuzzy logic (FL), support vector machine (SVM), and functional networks (FN). All AI tools were compared with different empirical models. The AI methods estimated the formation pressure with high accuracy (a high correlation coefficient and a low average absolute percentage error) and outperformed all previous models. The advantage of the new technique is its simplicity, stemming from its ability to estimate pore pressure without the need for different trends, in contrast to other models, which require two different trends (normal or abnormal pressure). Moreover, comparing the AI tools with each other indicates that SVM has the advantage in pore pressure prediction owing to its fast processing speed and high performance (a high correlation coefficient of 0.997 and a low average absolute percentage error of 0.14%).
Finally, a new empirical correlation for formation pressure was developed using the ANN method that can estimate pore pressure with high precision (a correlation coefficient of 0.998 and an average absolute percentage error of 0.17%).
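The two ranking metrics quoted above, the correlation coefficient and the average absolute percentage error (AAPE), can be computed as follows. The pressure values are illustrative, not the field data used in the study.

```python
import math

def corr_and_aape(actual, predicted):
    """Pearson correlation coefficient and average absolute percentage error."""
    n = len(actual)
    ma, mp = sum(actual) / n, sum(predicted) / n
    cov = sum((a - ma) * (p - mp) for a, p in zip(actual, predicted))
    sa = math.sqrt(sum((a - ma) ** 2 for a in actual))
    sp = math.sqrt(sum((p - mp) ** 2 for p in predicted))
    r = cov / (sa * sp)
    aape = 100 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / n
    return r, aape

actual = [4100.0, 4350.0, 4600.0, 4900.0, 5200.0]     # pore pressures, psi
predicted = [4110.0, 4342.0, 4615.0, 4893.0, 5210.0]  # hypothetical model output
r, aape = corr_and_aape(actual, predicted)
```

The reported SVM figures (r = 0.997, AAPE = 0.14%) correspond to predictions tracking the measured pressures this closely.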

Keywords: artificial intelligence (AI), formation pressure, artificial neural networks (ANN), fuzzy logic (FL), support vector machine (SVM), functional networks (FN), radial basis function (RBF)

Procedia PDF Downloads 149
626 A Facile One Step Modification of Poly(dimethylsiloxane) via Smart Polymers for Biomicrofluidics

Authors: A. Aslihan Gokaltun, Martin L. Yarmush, Ayse Asatekin, O. Berk Usta

Abstract:

Poly(dimethylsiloxane) (PDMS) is one of the most widely used materials in the fabrication of microfluidic devices. It is easily patterned and can replicate features down to nanometers. Its flexibility, gas permeability that allows oxygenation, and low cost also drive its wide adoption. However, a major drawback of PDMS is its hydrophobicity and fast hydrophobic recovery after surface hydrophilization. This results in significant non-specific adsorption of proteins as well as of small hydrophobic molecules such as therapeutic drugs, limiting the utility of PDMS in biomedical microfluidic circuitry. While silicon, glass, and thermoplastics have been used, they come with problems of their own, such as rigidity, high cost, and special tooling needs, which limit their use to a smaller user base. Many strategies to alleviate these common problems with PDMS lack general practical applicability or achieve modifications with limited shelf lives, which restricts large-scale implementation and adoption by industrial and research communities. Accordingly, we aim to tailor biocompatible PDMS surfaces by developing a simple, one-step bulk modification approach with novel smart materials to reduce non-specific molecular adsorption and to stabilize long-term cell analysis with PDMS substrates. Smart polymers, blended into PDMS during device manufacture, spontaneously segregate to surfaces when in contact with aqueous solutions and create a < 1 nm layer that reduces the non-specific adsorption of organic molecules and biomolecules. Our methods are fully compatible with existing PDMS device manufacture protocols without any additional processing steps. We have demonstrated that our modified PDMS microfluidic system is effective at blocking the adsorption of proteins while retaining the viability of primary rat hepatocytes and preserving the biocompatibility, oxygen permeability, and transparency of the material.
We expect this work will enable the development of fouling-resistant biomedical materials from microfluidics to hospital surfaces and tubing.

Keywords: cell culture, microfluidics, non-specific protein adsorption, PDMS, smart polymers

Procedia PDF Downloads 294
625 Corpus Stylistics and Multidimensional Analysis for English for Specific Purposes Teaching and Assessment

Authors: Svetlana Strinyuk, Viacheslav Lanin

Abstract:

Academic English has become the lingua franca of the international scientific community, which stimulates universities to introduce English for Academic Purposes (EAP) courses into the curriculum. Teaching EAP to L2 students can be supported by corpus technologies and digital stylistics. Special software was created to address the manifold task of teaching, assessing, and researching the academic writing of L2 students on the basis of digital stylistics and multidimensional analysis. A set of annotations (style markers) was built, covering the grammatical, lexical, and syntactic features most characteristic of academic writing. A contrastive comparison of two corpora, a "model corpus" of subject-domain-limited papers published by competent writers in leading academic journals and a "students' corpus" of subject-domain-limited papers written by final-year students, yields data about the features of academic writing underused or overused by L2 EAP students. Both corpora are tagged with software created in GATE Developer. The style markers used within the framework of the research may be replaced depending on the relevance and validity of the results obtained from the research corpora; thus, by selecting relevant (high-frequency) style markers and excluding less relevant, i.e., less frequent, annotations, high validity of the model is achieved. The software compares the data obtained from processing the model corpus with the students' corpus and produces reports that can be used in teaching and assessment: the less students' writing deviates from the model corpus, the higher their academic writing skill acquisition. The research showed that several style markers (hedging devices) were underused by L2 EAP students, whereas lexical linking devices were used excessively. Software of this kind, implemented in the teaching of EAP courses, serves as a successful visual aid, makes assessment more valid, is indicative of the degree of writing skill acquisition, and provides data for further research.
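The contrastive comparison described above reduces, at its core, to normalizing style-marker counts by corpus size and inspecting the deviation of the students' corpus from the model corpus. The marker counts and token totals below are invented for illustration; they merely mirror the reported direction of the findings (hedges underused, linking devices overused).

```python
def per_thousand(counts, n_tokens):
    """Style-marker frequencies normalized to occurrences per 1,000 tokens."""
    return {marker: 1000 * c / n_tokens for marker, c in counts.items()}

# Hypothetical tagged-annotation counts and corpus sizes.
model = per_thousand({"hedges": 420, "linking": 310}, 120_000)
students = per_thousand({"hedges": 95, "linking": 610}, 80_000)

# Positive deviation = overuse relative to the model corpus; negative = underuse.
deviation = {marker: students[marker] - model[marker] for marker in model}
```

A report of such deviations per marker is what the software would hand to the teacher for assessment.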

Keywords: corpus technologies in EAP teaching, multidimensional analysis, GATE Developer, corpus stylistics

Procedia PDF Downloads 200
624 Lead Chalcogenide Quantum Dots for Use in Radiation Detectors

Authors: Tom Nakotte, Hongmei Luo

Abstract:

Lead chalcogenide (PbS, PbSe, and PbTe) quantum dots (QDs) were synthesized for the purpose of implementing them in radiation detectors. Pb-based materials have long been of interest for gamma and x-ray detection due to their high absorption cross section and atomic number. The emphasis of the studies was on exploring how to control charge carrier transport within thin films containing the QDs. The properties of the QDs themselves can be altered by changing the size, shape, composition, and surface chemistry of the dots, while the properties of carrier transport within QD films are affected by post-deposition treatment of the films. The QDs were synthesized using colloidal synthesis methods, and films were grown using multiple coating techniques, such as spin coating and doctor blading. Current QD radiation detectors use the QDs as fluorophores in a scintillation detector. Here, the viability of using QDs in solid-state radiation detectors, for which the incident radiation causes a direct electronic response within the QD film, is explored. Achieving high sensitivity and accurate energy quantification in QD radiation detectors requires large carrier mobilities and diffusion lengths in the QD films. Pb chalcogenide QDs were synthesized with both traditional oleic acid ligands and more weakly binding oleylamine ligands, allowing for in-solution ligand exchange and making the deposition of thick films in a single step possible. The PbS and PbSe QDs showed better air stability than PbTe. After precipitation, the QDs passivated with the shorter ligand are dispersed in 2,6-difluoropyridine, resulting in colloidal solutions with concentrations anywhere from 10-100 mg/mL for film processing applications. More concentrated colloidal solutions produce thicker films during spin coating, while an extremely concentrated solution (100 mg/mL) can be used to produce films several micrometers thick by doctor blading.
Film thicknesses of micrometers or even millimeters are needed in radiation detectors for high-energy gamma rays, which are of interest in astrophysics and nuclear security, in order to provide sufficient stopping power.

Keywords: colloidal synthesis, lead chalcogenide, radiation detectors, quantum dots

Procedia PDF Downloads 127
623 Recycled Cellulosic Fibers and Lignocellulosic Aggregates for Sustainable Building Materials

Authors: N. Stevulova, I. Schwarzova, V. Hospodarova, J. Junak, J. Briancin

Abstract:

Sustainability is becoming a priority for developers, and the use of environmentally friendly materials is increasing. Nowadays, the application of raw materials from renewable sources in building materials has gained significant interest in this research area. Lignocellulosic aggregates and cellulosic fibers come from many different sources, such as wood, plants, and waste. They are promising alternative materials to replace synthetic, glass, and asbestos fibers as reinforcement in the inorganic matrices of composites. Natural fibers are renewable resources, so their cost is relatively low in comparison to synthetic fibers. From the standpoint of environmental consciousness, natural fibers are biodegradable, so their use can reduce CO2 emissions in building materials production. The use of cellulosic fibers in cementitious matrices has gained importance because they make the composites lighter at high fiber content, they have cost-performance ratios comparable to similar building materials, and they can be processed from waste paper, thus expanding the opportunities for waste utilization in cementitious materials. The main objective of this work is to investigate the possibility of using different wastes, hemp hurds from the processing of hemp stems and recycled fibers obtained from waste paper, for making cement composite products such as mortars based on cellulose fibers. This material was made of cement mortar containing an organic filler based on hemp hurds and recycled waste paper. In addition, the effects of the fibers and their content on selected physical and mechanical properties of the fiber-cement plaster composites were investigated. In this research, organic material was added to the mortars at 2.0, 5.0, and 10.0% replacement of cement by weight. A reference sample was made for comparison of the physical and mechanical properties of the cement composites based on recycled cellulosic fibers and lignocellulosic aggregates.
The prepared specimens were tested after 28 days of curing in order to determine density, compressive strength, and water absorbability. Scanning electron microscopy examination was also carried out.

Keywords: hemp hurds, organic filler, recycled paper, sustainable building materials

Procedia PDF Downloads 223
622 Analysis of Socio-Economics of Tuna Fisheries Management (Thunnus Albacares Marcellus Decapterus) in Makassar Waters Strait and Its Effect on Human Health and Policy Implications in Central Sulawesi-Indonesia

Authors: Siti Rahmawati

Abstract:

Indonesia has experienced a long period of monetary and economic crisis, followed by an upward trend in the price of fuel oil. This situation impacts all aspects of the tuna fishing community: the basic needs of fishing communities increase while purchasing power falls, leading to economic and social instability as well as affecting the health of fishermen's households. To understand this, the AHP method is applied to model tuna fisheries management priorities, the cold chain marketing channel, and the utilization levels that impact human health. The study is designed as development research with 180 respondents, and the data were analyzed by the Analytical Hierarchy Process (AHP) method. The development of the tuna fishery business can improve productivity through economic empowerment activities for coastal communities, improving the competitiveness of products, developing fish processing centers, and providing internal capital for the development of an optimal fishery business. From the economic aspect, the fishery business is attractive because its benefit-cost ratio is 2.86; this means that over the 10-year economic life of the project it can perform well, and since B/C > 1, the investment is economically viable. From the health aspect, tuna can reduce the risk of dying from heart disease by 50% because tuna supplies selenium to the human body: the consumption of 100 g of tuna meets 52.9% of the body's selenium requirement and activates the antioxidant enzyme glutathione peroxidase, which can protect the body from the free radicals that stimulate various cancers. The results of the Analytic Hierarchy Process show that the quality of tuna products is the top priority, with export quality and quality control needed in order to compete in the global market. The implementation of the policy can increase the income of fishermen, reduce the poverty of fishermen's households, and benefit the health of a population at high risk of disease.
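The benefit-cost criterion cited above (B/C = 2.86 over a 10-year economic life) can be reproduced in outline by discounting annual benefit and cost streams. The cash flows and discount rate below are illustrative assumptions chosen to land near the reported ratio, not the study's data.

```python
def benefit_cost_ratio(benefits, costs, rate):
    """Ratio of the present value of benefits to the present value of costs."""
    def pv(cash_flows):
        return sum(x / (1 + rate) ** t for t, x in enumerate(cash_flows, start=1))
    return pv(benefits) / pv(costs)

years = 10
benefits = [200.0] * years   # hypothetical annual benefits (arbitrary units)
costs = [70.0] * years       # hypothetical annual costs
bcr = benefit_cost_ratio(benefits, costs, rate=0.08)
```

A ratio above 1 marks the investment as economically viable, which is the test the abstract applies.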

Keywords: management of tuna, social, economic, health

Procedia PDF Downloads 316
621 Finite Element Analysis of Mechanical Properties of Additively Manufactured 17-4 PH Stainless Steel

Authors: Bijit Kalita, R. Jayaganthan

Abstract:

Additive manufacturing (AM) is a novel manufacturing method which provides more freedom in design, manufacturing of near-net-shaped parts on demand, lower cost of production, and faster delivery to market. Among metal AM techniques, Laser Powder Bed Fusion (L-PBF) is the most prominent one, providing higher accuracy and powder efficiency in comparison to other methods. In particular, 17-4 PH alloy is a martensitic precipitation-hardened (PH) stainless steel characterized by resistance to corrosion up to 300°C and tailorable strengthening by copper precipitates. Additively manufactured 17-4 PH stainless steel exhibits a dendritic/cellular solidification microstructure in the as-built condition. It is widely used as a structural material in marine environments, power plants, aerospace, and chemical industries. The excellent weldability of 17-4 PH stainless steel and its ability to be heat treated to improve mechanical properties make it a good material choice for L-PBF. In this study, the microstructures of martensitic stainless steels in the as-built state, as well as the effects of process parameters, building atmosphere, and heat treatments on the microstructures, are reviewed. Mechanical properties of fabricated parts are studied through micro-hardness and tensile tests. Tensile tests are carried out under different strain rates at room temperature. In addition, the effect of process parameters and heat treatment conditions on mechanical properties is critically reviewed. These studies revealed the performance of L-PBF-fabricated 17-4 PH stainless-steel parts under cyclic loading, and the results indicated that fatigue properties were more sensitive to the defects generated by L-PBF (e.g., porosity, microcracks), leading to low fracture strains and stresses under cyclic loading.
Rapid melting, solidification, and re-melting of powders during the process, together with different combinations of processing parameters, result in a complex thermal history and a heterogeneous microstructure; high-efficiency, low-cost heat treatments are therefore necessary to better control the microstructures and properties of L-PBF PH stainless steels.

Keywords: 17–4 PH stainless steel, laser powder bed fusion, selective laser melting, microstructure, additive manufacturing

Procedia PDF Downloads 117
620 Cellular RNA-Binding Domains with Distant Homology in Viral Proteomes

Authors: German Hernandez-Alonso, Antonio Lazcano, Arturo Becerra

Abstract:

Viruses remain controversial and poorly understood; their origin is an enigma and one of the great challenges for contemporary biology. Three main theories have tried to explain the origin of viruses: regressive evolution, escaped host genes, and pre-cellular origin. Under the escaped host gene theory, a cellular origin can be assumed for viral components such as protein RNA-binding domains. These universally distributed RNA-binding domains are involved in RNA metabolism, including transcription, processing and modification of transcripts, translation, RNA degradation, and its regulation. In viruses, these domains are present in important proteins such as helicases, nucleases, polymerases, capsid proteins, and regulation factors; they are therefore implicated in the replicative cycle and parasitic processes of viruses, and it is reasonable to think that such domains show low levels of divergence due to selective pressure. For these reasons, the main goal of this project is to create a catalogue of the RNA-binding domains found in all available viral proteomes, using bioinformatics tools in order to analyze their evolutionary history and thus shed light on virus evolution in general. The ProDom database was used to obtain more than six thousand RNA-binding domain families that belong to the three cellular domains of life and some viral groups. From the sequences of these families, protein profiles were created using HMMER 3.1 tools in order to find distant homologs within more than four thousand viral proteomes available in GenBank. Once the analysis was completed, almost three thousand hits were obtained in the viral proteomes. The homologous sequences were found in proteomes of the principal Baltimore viral groups, showing interesting distribution patterns that can contribute to understanding the evolution of viruses and their host-virus interactions.
The presence of cellular RNA-binding domains within viral proteomes seems to be explained by close interactions between viruses and their hosts: recruitment of these domains is advantageous for viral fitness, allowing viruses to adapt to the host cellular environment.
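The profile-search pipeline described above can be sketched as follows; the file names are hypothetical placeholders, and the HMMER 3.1 command lines are only assembled here, to be executed separately.

```python
from pathlib import Path

def profile_search_commands(family_msa, proteome_db, out_dir="hmm_out"):
    """Assemble HMMER 3.1 command lines: hmmbuild makes a profile HMM from
    one ProDom family alignment; hmmsearch scans it against viral proteomes."""
    out = Path(out_dir)
    hmm = out / (Path(family_msa).stem + ".hmm")
    build = ["hmmbuild", str(hmm), family_msa]
    search = ["hmmsearch", "--tblout", str(out / "hits.tbl"), str(hmm), proteome_db]
    return build, search

# hypothetical inputs: one family alignment and a concatenated proteome database
build_cmd, search_cmd = profile_search_commands("PD000001.sto", "viral_proteomes.fasta")
```

Each list can then be run with `subprocess.run(cmd, check=True)`, looping over all families; the `--tblout` tables hold the candidate distant homologs to be mapped onto Baltimore groups.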

Keywords: bioinformatics tools, distant homology, RNA-binding domains, viral evolution

Procedia PDF Downloads 387
619 Exploring Pre-Trained Automatic Speech Recognition Model HuBERT for Early Alzheimer’s Disease and Mild Cognitive Impairment Detection in Speech

Authors: Monica Gonzalez Machorro

Abstract:

Dementia is hard to diagnose because of the lack of early physical symptoms. Early dementia recognition is key to improving the living conditions of patients. Speech technology is considered a valuable biomarker for this challenge. Recent works have utilized conventional acoustic features and machine learning methods to detect dementia in speech, and BERT-like classifiers have reported the most promising performance. One constraint, nonetheless, is that these studies are based either on human transcripts or on transcripts produced by automatic speech recognition (ASR) systems. The contribution of this research is to explore a method that does not require transcriptions to detect early Alzheimer’s disease (AD) and mild cognitive impairment (MCI). This is achieved by fine-tuning a pre-trained ASR model for the downstream early AD and MCI detection tasks. To do so, a subset of the thoroughly studied Pitt Corpus is customized. The subset is balanced for class, age, and gender. Data processing also involves cropping the samples into 10-second segments. For comparison purposes, a baseline model is defined by training and testing a Random Forest with 20 acoustic features extracted using the librosa library in Python: zero-crossing rate, MFCCs, spectral bandwidth, spectral centroid, root mean square, and short-time Fourier transform. The baseline model achieved 58% accuracy. To fine-tune HuBERT as a classifier, an average pooling strategy is employed to merge the 3D representations of the audio into 2D representations, and a linear layer is added. The pre-trained model used is ‘hubert-large-ls960-ft’. Empirically, the number of epochs selected is 5, and the batch size is 1. Experiments show that the proposed method reaches 69% balanced accuracy. This suggests that the linguistic and speech information encoded in the self-supervised ASR-based model is able to learn acoustic cues of AD and MCI.
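A minimal sketch of the acoustic baseline, assuming numpy and scikit-learn and hand-rolling just two of the 20 clip-level features (zero-crossing rate and root mean square) on synthetic stand-ins for the 10-second segments:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def clip_features(signal):
    """Two of the 20 clip-level acoustic features in the baseline:
    zero-crossing rate and root mean square, averaged over frames."""
    frame, hop, feats = 2048, 512, []
    for start in range(0, len(signal) - frame, hop):
        w = signal[start:start + frame]
        zcr = np.mean(np.abs(np.diff(np.sign(w))) > 0)  # fraction of sign changes
        rms = np.sqrt(np.mean(w ** 2))                  # frame energy
        feats.append((zcr, rms))
    return np.mean(feats, axis=0)  # clip-level feature vector

# synthetic stand-ins for 10-second segments sampled at 16 kHz
rng = np.random.default_rng(0)
X = np.array([clip_features(rng.normal(size=16000 * 10)) for _ in range(8)])
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # hypothetical AD/control labels
clf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)
```

In practice the librosa implementations (`librosa.feature.zero_crossing_rate`, `librosa.feature.rms`, etc.) replace the hand-rolled features, and accuracy is evaluated on held-out data rather than the training set.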

Keywords: automatic speech recognition, early Alzheimer’s recognition, mild cognitive impairment, speech impairment

Procedia PDF Downloads 127
618 A Furniture Industry Concept for a Sustainable Generative Design Platform Employing Robot Based Additive Manufacturing

Authors: Andrew Fox, Tao Zhang, Yuanhong Zhao, Qingping Yang

Abstract:

The furniture manufacturing industry has generally been slow to adopt the latest manufacturing technologies, historically relying heavily upon specialised conventional machinery. This approach not only requires high levels of specialist process knowledge, training, and capital investment but also suffers from significant subtractive manufacturing waste and high logistics costs due to the requirement for centralised manufacturing, with much furniture product neither recycled nor reused. This paper aims to address these problems by introducing suitable digital manufacturing technologies to create step changes in furniture manufacturing design, as traditional design practices have been reported to build in 80% of environmental impact. In this paper, a 3D printing robot for furniture manufacturing is reported. The 3D printing robot mainly comprises a KUKA industrial robot, an Arduino microprocessor, and a self-assembled screw-fed extruder. Compared to a traditional 3D printer, the 3D printing robot has a larger motion range and can easily be upgraded to enlarge the maximum size of the printed object. Generative design is also investigated in this paper, aiming to establish a combined design methodology that allows assessment of goals, constraints, materials, and manufacturing processes simultaneously. ‘Matrixing’ for part amalgamation and product performance optimisation is enabled. The generative design goals of integrated waste reduction, increased manufacturing efficiency, optimised product performance, and reduced environmental impact institute a truly lean and innovative future design methodology. In addition, there is massive future potential to leverage Single Minute Exchange of Die (SMED) theory through generative design post-processing of geometry for robot manufacture, resulting in ‘mass customised’ furniture with virtually no setup requirements. These generatively designed products can be manufactured using the robot-based additive manufacturing.
Essentially, the 3D printing robot is already functional; some initial goals have been achieved and are also presented in this paper.

Keywords: additive manufacturing, generative design, robot, sustainability

Procedia PDF Downloads 132
617 Damage Detection in a Cantilever Beam under Different Excitation and Temperature Conditions

Authors: A. Kyprianou, A. Tjirkallis

Abstract:

Condition monitoring of structures in service is very important as it provides information about the risk of damage development. One of the essential constituents of structural condition monitoring is the damage detection methodology. In the context of condition monitoring of in-service structures, a damage detection methodology analyses data obtained from the structure while it is in operation. Usually, this means that the data could be affected by operational and environmental conditions in a way that could mask the effects of possible damage on the data. Depending on the damage detection methodology, this could lead either to false alarms or to missing existing damage. In this article, a damage detection methodology based on the spatio-temporal continuous wavelet transform (SPT-CWT) analysis of a sequence of experimental time responses of a cantilever beam is proposed. The cantilever is subjected to white and pink noise excitation to simulate different operating conditions. In addition, in order to simulate changing environmental conditions, the cantilever is subjected to heating by a heat gun. The response of the cantilever beam is measured by a high-speed camera. Edges are extracted from the series of images of the beam response captured by the camera. Subsequent processing of the edges gives a series of time responses at 439 points on the beam. This sequence is then analyzed using the SPT-CWT to identify damage. The algorithm proposed was able to clearly identify damage under any condition when the structure was excited by white noise force. In addition, in the case of white noise excitation, the analysis could also reveal the position of the heat gun when it was used to heat the structure. The analysis could distinguish the different operating conditions, i.e., between responses due to white noise excitation and responses due to pink noise excitation.
During pink noise excitation, although damage and changing temperature were identified, it was not possible to clearly separate the effect of damage from that of temperature. The methodology proposed in this article for damage detection enables the separation of the damage effect from those of temperature and excitation in data obtained from measurements of a cantilever beam. This methodology does not require information about the a priori state of the structure.
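As a rough illustration of the wavelet stage only (not the authors' SPT-CWT implementation), a minimal Morlet continuous wavelet transform in numpy applied to one synthetic beam response:

```python
import numpy as np

def cwt_morlet(x, scales, w0=5.0):
    """Minimal Morlet continuous wavelet transform: rows are scales,
    columns are time samples."""
    n = len(x)
    t = np.arange(n) - n // 2  # wavelet support centred in the window
    out = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        wavelet = np.exp(1j * w0 * t / s - (t / s) ** 2 / 2) / np.sqrt(s)
        out[i] = np.convolve(x, np.conj(wavelet[::-1]), mode="same")
    return out

# one synthetic beam response: a 100 Hz vibration sampled at 1 kHz,
# standing in for one of the 439 edge-extracted time responses
fs, f = 1000, 100
time = np.arange(0, 1, 1 / fs)
response = np.sin(2 * np.pi * f * time)
scales = np.arange(1, 31)
coeffs = np.abs(cwt_morlet(response, scales))
```

The scale with maximum energy corresponds to the vibration frequency (for a Morlet wavelet, roughly s = w0·fs / (2πf) ≈ 8 here); in the SPT-CWT case this analysis runs jointly over the spatial dimension as well.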

Keywords: spatiotemporal continuous wavelet transform, damage detection, data normalization, varying temperature

Procedia PDF Downloads 279
616 Application of Raman Spectroscopy for Ovarian Cancer Detection: Comparative Analysis of Fresh, Formalin-Fixed, and Paraffin-Embedded Samples

Authors: Zeinab Farhat, Nicolas Errien, Romuald Wernert, Véronique Verriele, Frédéric Amiard, Philippe Daniel

Abstract:

Ovarian cancer, also known as the silent killer, is the fifth most common cancer among women worldwide, and its death rate is higher than that of other gynecological cancers. The low survival rate of women with high-grade serous ovarian carcinoma highlights the critical need for new methods for early detection and diagnosis of the disease. The aim of this study was to evaluate whether Raman spectroscopy combined with chemometric methods such as Principal Component Analysis (PCA) could differentiate between cancerous and normal tissues across different types of samples: paraffin-embedded, chemically deparaffinized, formalin-fixed, and fresh samples of the same normal and malignant ovarian tissue. The method was applied specifically to two critical spectral regions: the signature region (860-1000 cm⁻¹) and the high-frequency region (2800-3100 cm⁻¹). The mean spectra of paraffin-embedded normal and malignant tissues showed almost similar intensity. On the other hand, the mean spectra of normal and cancer tissues from chemically deparaffinized, formalin-fixed, and fresh samples show significant intensity differences. These spectral differences reflect variations in the molecular composition of the tissues, particularly lipids and proteins. PCA, applied to distinguish between cancer and normal tissues, was performed on whole spectra and on selected regions. The PCA score plot of paraffin-embedded samples shows considerable overlap between the two groups. However, the PCA scores of chemically deparaffinized, formalin-fixed, and fresh samples showed good discrimination of tissue types. Our findings were validated by analyses of a set of samples whose status (normal or cancerous) was not previously known. The results of this study suggest that Raman spectroscopy combined with PCA has the capacity to provide clinically significant differentiation between normal and cancerous ovarian tissues.
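A minimal sketch of the region-restricted PCA step, assuming scikit-learn and using synthetic spectra (one Gaussian band whose position differs between two hypothetical tissue classes) in place of measured Raman data:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
wavenumbers = np.arange(400, 3200)  # Raman shift axis, cm^-1

def spectrum(band_center):
    """Synthetic spectrum: one Gaussian band plus noise; the band position
    stands in for a class-dependent molecular difference."""
    band = np.exp(-((wavenumbers - band_center) / 15.0) ** 2)
    return band + 0.02 * rng.normal(size=wavenumbers.size)

# ten "normal" and ten "cancer" spectra (hypothetical classes)
X = np.array([spectrum(930) for _ in range(10)] + [spectrum(955) for _ in range(10)])

# restrict to the 860-1000 cm^-1 signature region before PCA
mask = (wavenumbers >= 860) & (wavenumbers <= 1000)
scores = PCA(n_components=2).fit_transform(X[:, mask])
```

A score plot of the first two components then shows the two classes as separated clusters along PC1, the behaviour reported for the chemically deparaffinized, formalin-fixed, and fresh samples.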

Keywords: Raman spectroscopy, ovarian cancer, signal processing, Principal Component Analysis, classification

Procedia PDF Downloads 29
615 Coherent Optical Tomography Imaging of Epidermal Hyperplasia in Vivo in a Mouse Model of Oxazolone Induced Atopic Dermatitis

Authors: Eric Lacoste

Abstract:

Laboratory animals are widely used to model human pathologies in dermatology, such as atopic dermatitis (AD). These models provide a better understanding of the pathophysiology of this complex and multifactorial disease, the discovery of potential new therapeutic targets, and the testing of the efficacy of new therapeutics. However, confirmation of the correct development of AD is mainly based on histology from skin biopsies, requiring invasive surgery or euthanasia of the animals, plus slicing and staining protocols. There are now accessible imaging technologies such as Optical Coherence Tomography (OCT) which allow non-invasive visualization of the main histological structures of the skin (such as the stratum corneum, epidermis, and dermis) and assessment of the dynamics of the pathology or the efficacy of new treatments. Briefly, female immunocompetent hairless mice (SKH1 strain) were sensitized and challenged topically on the back and ears for about 4 weeks. Back skin and ear thickness were measured with a calliper three times per week, in complement to a macroscopic evaluation of atopic dermatitis lesions on the back: erythema, scaling, and excoriation scoring. In addition, OCT was performed on the back and ears of the animals. OCT makes a virtual in-depth section (tomography) of the imaged organ using a laser, a camera, and image processing software, allowing fast, non-contact, and non-denaturing acquisitions of the explored tissues. To perform the imaging sessions, the animals were anesthetized with isoflurane and placed on a support under the OCT for a total examination time of 5 to 10 minutes. The results show a good correlation of the OCT technique with classical HES histology for skin lesion structures such as hyperkeratosis, epidermal hyperplasia, and dermis thickness.
This OCT imaging technique can, therefore, be used in live animals at different times for longitudinal evaluation by repeated measurements of lesions in the same animals, in addition to the classical histological evaluation. Furthermore, this original imaging technique speeds up research protocols, reduces the number of animals and refines the use of the laboratory animal.

Keywords: atopic dermatitis, mouse model, oxazolone model, histology, imaging

Procedia PDF Downloads 132
614 An Integrated Approach to Solid Waste Management of Karachi, Pakistan (Waste-to-Energy Options)

Authors: Engineer Dilnawaz Shah

Abstract:

Solid Waste Management (SWM) is perhaps one of the most important elements constituting the environmental health and sanitation of the urban developing sector. The management system has several components that are integrated as well as interdependent; thus, the efficiency and effectiveness of the entire system are affected when any of its functional components fails or does not perform up to the required level of operation. The Sindh Solid Waste Management Board (SSWMB) is responsible for the management of solid waste in the entire city. There is a need to adopt an engineered approach in redesigning the existing system. In most towns, street sweeping operations have been mechanized and are done by vehicle-operated machinery. Construction of Garbage Transfer Stations (GTS) at a number of locations within the city will cut the cost of transporting waste to disposal sites. Material processing, recovery of recyclables, compaction, volume reduction, and increased density will enable transportation of waste to disposal sites/landfills via long vehicles (bulk transport), minimizing transport/traffic and environmental pollution-related issues. Development of disposal sites into proper sanitary landfill sites is mandatory. The transportation mechanism uses garbage vehicles with either hauled or fixed container systems, employing crews for mechanical or manual loading. The number of garbage vehicles is inadequate, and due to the comparatively long haulage to disposal sites, there are problems of frequent vehicular maintenance and high fuel costs. Foreign investors have shown interest in enterprising improvement schemes and have proposed operating a solid waste management system in Karachi. The waste-to-energy option is being considered as a practical approach to generate power and reduce waste load, a two-pronged solution to a growing environmental problem.
The paper presents results and analysis of a recent study into waste generation and characterization probing into waste-to-energy options for Karachi City.

Keywords: waste to energy option, integrated approach, solid waste management, physical and chemical composition of waste in Karachi

Procedia PDF Downloads 46
613 Revealing Single Crystal Quality by Insight Diffraction Imaging Technique

Authors: Thu Nhi Tran Caliste

Abstract:

X-ray Bragg diffraction imaging (“topography”) entered into practical use when Lang designed an “easy” technical setup to characterise the defects and distortions in the high-perfection crystals produced for the microelectronics industry. The use of this technique extended to all kinds of high-quality crystals and deposited layers, and a series of publications explained, starting from the dynamical theory of diffraction, the contrast of the images of the defects. A quantitative version of monochromatic topography known as “Rocking Curve Imaging” (RCI) was implemented by using synchrotron light and taking advantage of the dramatic improvement of 2D detectors and computerised image processing. The raw data are constituted by a number (~300) of images recorded along the diffraction (“rocking”) curve. If the quality of the crystal is such that a one-to-one relation between a pixel of the detector and a voxel within the crystal can be established (this approximation is very well fulfilled if the local mosaic spread of the voxel is < 1 mradian), software we developed provides, from the rocking curve recorded at each pixel of the detector, not only the voxel’s integrated intensity (the only data provided by the previous techniques) but also its mosaic spread (FWHM) and peak position. We will show, based on many examples, that these new data, never recorded before, open the field to a highly enhanced characterization of crystals and deposited layers. These examples include the characterization of dislocations and twins occurring during silicon growth, various growth features in Al2O3, GaN, and CdTe (where the diffraction displays the Borrmann anomalous absorption, which leads to a new type of image), and the characterisation of defects within deposited layers, or their effect on the substrate.
We could also observe (thanks to the very high sensitivity of the setup installed on BM05, which allows revealing these faint effects) that, when dealing with very perfect crystals, the Kato interference fringes predicted by dynamical theory are also associated with very small modifications of the local FWHM and peak position (of the order of the µradian). This rather unexpected (at least for us) result appears to be in keeping with preliminary dynamical theory calculations.
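The per-pixel extraction can be sketched as follows, a minimal numpy illustration (not the authors' software) applied to one synthetic ~300-point rocking curve:

```python
import numpy as np

def rocking_curve_metrics(angles, counts):
    """Integrated intensity, FWHM, and peak position of one pixel's
    rocking curve; RCI repeats this for every detector pixel."""
    # trapezoidal integrated intensity over the rocking angle
    integrated = np.sum((counts[1:] + counts[:-1]) / 2 * np.diff(angles))
    peak = angles[np.argmax(counts)]
    above = angles[counts >= counts.max() / 2]
    fwhm = above[-1] - above[0]  # full width at half maximum
    return integrated, fwhm, peak

# synthetic Gaussian rocking curve for a single voxel (angles in mrad)
angles = np.linspace(-1.5, 1.5, 300)
counts = np.exp(-((angles - 0.2) / 0.3) ** 2)
intensity, fwhm, position = rocking_curve_metrics(angles, counts)
```

Applied to every pixel of the ~300-image stack, this yields maps of integrated intensity, local mosaic spread (FWHM), and peak position across the crystal.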

Keywords: rocking curve imaging, X-ray diffraction, defect, distortion

Procedia PDF Downloads 131
612 A Closed Loop Audit of Pre-operative Transfusion Samples in Orthopaedic Patients at a Major Trauma Centre

Authors: Tony Feng, Rea Thomson, Kathryn Greenslade, Ross Medine, Jennifer Easterbrook, Calum Arthur, Matilda Powell-Bowns

Abstract:

There are clear guidelines on taking group and screen samples (G&S) for elective arthroplasty and major trauma. However, there is limited guidance on blood grouping for other trauma patients. The purpose of this study was to review the level of blood grouping at a major trauma centre and validate a protocol that limits the expensive processing of G&S samples. After reviewing the national guidance on transfusion samples in orthopaedic patients, data were prospectively collected for all orthopaedic admissions to the Royal Infirmary of Edinburgh between January and February 2023. The cause of admission, the number of G&S samples processed on arrival, and the need for red cells were collected from the hospital blood bank. A new protocol was devised based on a multidisciplinary meeting which limited the requirement for G&S samples to presentations in “category X”, including neck-of-femur fractures (NOFs), pelvic fractures, and major trauma. A re-audit was completed between April and May after departmental education and institution of this protocol. 759 patients were admitted under orthopaedics in the major trauma centre across the two separate months. 47% of patients were admitted with presentations falling in category X (354/759), and patients in this category accounted for 88% (92/104) of those requiring post-operative red cell transfusions. Of these, 51% were attributed to NOFs (47/92). In the initial audit, 50% of trauma patients outwith category X had samples sent (116/230), estimated to cost £3800. Of these 230 patients, 3% required post-operative transfusions (7/230). In the re-audit, 23% of patients outwith category X had samples sent (40/173), estimated to cost £1400, of which 3% (5/173) required transfusions. None of the transfusions in these patients in either audit were related to their operation, and the protocol achieved an estimated cost saving of £2400 over one month.
This study highlights the importance of sending samples for patients with certain categories of orthopaedic trauma (category X) due to the high demand for post-operative transfusions. However, the absence of transfusion requirements in other presentations suggests over-testing. While implementation of the new protocol has markedly reduced over-testing, additional interventions are required to reduce this further.
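The headline figures follow directly from the counts above; a quick check using the audit's own cost estimates (which are rounded to the nearest £100):

```python
# sampling rates for patients outwith category X, before and after the protocol
rate_before = 116 / 230   # initial audit: ~50% had G&S samples sent
rate_after = 40 / 173     # re-audit: ~23%

# estimated monthly saving, from the audit's own cost estimates
saving = 3800 - 1400      # £2400 per month
```

The re-audit more than halved the sampling rate outwith category X while the post-operative transfusion rate in that group stayed at 3%.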

Keywords: blood transfusion, quality improvement, orthopaedics, trauma

Procedia PDF Downloads 76
611 Economics of Sugandhakokila (Cinnamomum Glaucescens (Nees) Dury) in Dang District of Nepal: A Value Chain Perspective

Authors: Keshav Raj Acharya, Prabina Sharma

Abstract:

Sugandhakokila (Cinnamomum glaucescens Nees. Dury) is a large evergreen native tree species, mostly confined naturally to the mid-hills of the Rapti Zone of Nepal. The species has been prioritized for agro-technology development as well as for research and development by the Department of Plant Resources. This species is banned for export outside the country without processing by the Government of Nepal, to encourage value addition within the country. The present study was carried out in Chillikot village of Dang district to find out the economic contribution of C. glaucescens to the local economy and to document the major conservation threats to this species. Participatory Rural Appraisal (PRA) tools such as household surveys, key informant interviews, and focus group discussions were used to collect the data. The present study reveals that about 1.7 million Nepalese rupees (NPR) are contributed annually to the local economy of 29 households from the collection of C. glaucescens berries in the study area. The average annual income of each family was around NPR 67,165.38 (US$ 569.19) from the sale of the berries, which contributes about 53% of total household income. Six different value chain actors are involved in the C. glaucescens business. The maximum profit margin was taken by collectors, followed by producers, exporters, and processors; the profit margin was lowest for regional and village traders. The total profit margin for producers was NPR 138.86/kg, and regional traders gained NPR 17/kg. However, there is a possibility to increase the profit of producers by NPR 8.00 per kg of berries through the initiation of community forest user groups and village cooperatives in the area. Open-access resource status, insect infestation of over-mature trees, and browsing by goats were identified as major conservation threats to this species.
Handing over national forest as community forest, linking producers with processors through an organized market channel, and replacing old trees through new plantation are recommended for the future.

Keywords: community forest, conservation threats, C. glaucescens, value chain analysis

Procedia PDF Downloads 140
610 Transition in Protein Profile, Maillard Reaction Products and Lipid Oxidation of Flavored Ultra High Temperature Treated Milk

Authors: Muhammad Ajmal

Abstract:

Thermal processing and subsequent storage of ultra-heat-treated (UHT) milk lead to alterations in the protein profile, Maillard reaction, and lipid oxidation. The concentration of carbohydrates in the normal and flavored versions of UHT milk is considerably different. Transition in protein profile, Maillard reaction, and lipid oxidation in UHT flavored milk was determined for 90 days at ambient conditions and analyzed at 0, 45, and 90 days of storage. Protein profile, hydroxymethylfurfural, furosine, Nε-carboxymethyl-L-lysine, fatty acid profile, free fatty acids, peroxide value, and sensory characteristics were determined. After 90 days of storage, fat, protein, and total solids contents and pH were significantly less than the initial values determined at day 0. Compared with the protein profile of normal UHT milk, more pronounced changes were recorded in the different protein fractions of flavored UHT milk at 45 and 90 days of storage. Tyrosine contents of flavored UHT milk at 0, 45, and 90 days of storage were 3.5, 6.9, and 15.2 µg tyrosine/ml. After 45 days of storage, the declines in αs1-casein, αs2-casein, β-casein, κ-casein, β-lactoglobulin, α-lactalbumin, immunoglobulin, and bovine serum albumin were 3.35%, 10.5%, 7.89%, 18.8%, 53.6%, 20.1%, 26.9%, and 37.5%. After 90 days of storage, the declines were 11.2%, 34.8%, 14.3%, 33.9%, 56.9%, 24.8%, 36.5%, and 43.1%, respectively. Hydroxymethylfurfural contents of UHT milk at 0, 45, and 90 days of storage were 1.56, 4.18, and 7.61 µmol/L. Furosine contents of flavored UHT milk at 0, 45, and 90 days of storage were 278, 392, and 561 mg/100 g protein. Nε-carboxymethyl-L-lysine contents of UHT flavored milk at 0, 45, and 90 days of storage were 67, 135, and 343 mg/kg protein. After 90 days of storage of flavored UHT milk, the loss of unsaturated fatty acids was 45.7% of the initial values.
At 0, 45, and 90 days of storage, free fatty acids of flavored UHT milk were 0.08%, 0.11%, and 0.16% (p<0.05). Peroxide values of flavored UHT milk at 0, 45, and 90 days of storage were 0.22, 0.65, and 2.88 meq O₂/kg. Sensory analysis of flavored UHT milk after 90 days indicated that appearance, flavor, and mouthfeel scores significantly decreased from the initial values recorded at day 0. The findings of this investigation evidence that more pronounced changes take place in the protein profile, Maillard reaction products, and lipid oxidation of flavored UHT milk than of normal UHT milk.

Keywords: UHT flavored milk, hydroxymethyl furfural, lipid oxidation, sensory properties

Procedia PDF Downloads 199
609 Humans’ Physical Strength Capacities on Different Handwheel Diameters and Angles

Authors: Saif K. Al-Qaisi, Jad R. Mansour, Aseel W. Sakka, Yousef Al-Abdallat

Abstract:

Handwheels are common to numerous industries, such as power generation plants, oil refineries, and chemical processing plants. The forces required to manually turn handwheels have been shown to exceed operators’ physical strengths, posing risks for injuries. Therefore, the objectives of this research were twofold: (1) to determine humans’ physical strengths on handwheels of different sizes and angles and (2) to subsequently propose recommended torque limits (RTLs) that accommodate the strengths of even the weaker segment of the population. Thirty male and thirty female participants were recruited from a university student population. Participants were asked to exert their maximum possible forces in a counter-clockwise direction on handwheels of different sizes (35 cm, 45 cm, 60 cm, and 70 cm) and angles (0°-horizontal, 45°-slanted, and 90°-vertical). The participant’s posture was controlled by adjusting the handwheel to be at the elbow level of each participant, requiring the participant to stand erect, and restricting the hand placements to be in the 10-11 o’clock position for the left hand and the 4-5 o’clock position for the right hand. A torque transducer (Futek TDF600) was used to measure the maximum torques generated by the human. Three repetitions were performed for each handwheel condition, and the average was computed. Results showed that, at all handwheel angles, as the handwheel diameter increased, the maximum torques generated also increased, while the underlying forces decreased. In controlling the handwheel diameter, the 0° handwheel was associated with the largest torques and forces, and the 45° handwheel was associated with the lowest torques and forces. Hence, a larger handwheel diameter –as large as 70 cm– in a 0° angle is favored for increasing the torque production capacities of users. 
Also, it was recognized that, regardless of the handwheel diameter size and angle, the torque demands in the field are much greater than humans’ torque production capabilities. As such, this research proposed RTLs for the different handwheel conditions by using the 25th percentile values of the females’ torque strengths. The proposed recommendations may serve future standard developers in defining torque limits that accommodate humans’ strengths.
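The 25th-percentile rule proposed above can be sketched numerically; the torque values below are illustrative placeholders, not measurements from the study:

```python
import numpy as np

# Hypothetical female torque strengths (N·m) for one handwheel condition;
# these numbers are made up for illustration, not data from the study.
female_torques = np.array([38.2, 41.5, 35.0, 44.1, 39.8, 36.7, 42.3, 40.9,
                           33.5, 45.2, 37.6, 43.0, 34.8, 41.1, 38.9])

# RTL = 25th percentile of the female strength distribution, so that the
# limit accommodates even the weaker segment of the population.
rtl = float(np.percentile(female_torques, 25))
```

In practice one such limit would be derived per handwheel diameter and angle combination, from the measured strength distributions.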

Keywords: handwheel angle, handwheel diameter, humans’ torque production strengths, recommended torque limits

Procedia PDF Downloads 112
608 Cost-Effective Mechatronic Gaming Device for Post-Stroke Hand Rehabilitation

Authors: A. Raj Kumar, S. Bilaloglu

Abstract:

Stroke is a leading cause of adult disability worldwide. We depend on our hands for our activities of daily living (ADL). Although many patients regain the ability to walk, they continue to experience long-term hand motor impairments. As the number of individuals with young stroke is increasing, there is a critical need for effective approaches to rehabilitation of hand function post-stroke. Motor relearning for dexterity requires task-specific kinesthetic, tactile, and visual feedback. However, when a stroke results in both sensory and motor impairment, it becomes difficult to ascertain when and what type of sensory substitutions can facilitate motor relearning. Ideally, real-time task-specific data on the ability to learn, and data-driven feedback to assist such learning, would greatly aid rehabilitation for dexterity. We have found that kinesthetic and tactile information from the unaffected hand can help patients relearn the use of optimal fingertip forces during a grasp-and-lift task. Measurement of fingertip grip force (GF), load force (LF), their corresponding rates (GFR and LFR), and other metrics can be used to gauge the impairment level and progress during learning. Currently, ATI Mini force/torque sensors are used in research settings to measure and compute the LF, GF, and their rates while grasping objects of different weights and textures. Use of the ATI sensor is cost-prohibitive for deployment in clinical or at-home rehabilitation. A cost-effective mechatronic device was developed to quantify GF, LF, and their rates for stroke rehabilitation using off-the-shelf components such as load cells, FlexiForce sensors, and an Arduino UNO microcontroller. A salient feature of the device is its integration with an interactive gaming environment to render a highly engaging user experience. 
This paper elaborates on the integration of kinesthetic and tactile sensing through real-time computation of LF, GF, and their corresponding rates, on the associated information processing, and on interactive interfacing through augmented reality for visual feedback.
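A minimal sketch of the signal processing involved: the force rates (GFR, LFR) can be obtained by numerically differentiating the sampled force signal. The 100 Hz sampling rate and the simulated force ramp below are assumptions for illustration, not specifications of the authors' device:

```python
import numpy as np

fs = 100.0                                 # assumed sampling rate (Hz)
t = np.arange(0.0, 1.0, 1.0 / fs)          # 1 s of samples
grip_force = 10.0 * (1 - np.exp(-5 * t))   # simulated grip force ramp (N)

# Grip force rate (N/s): first time-derivative of the sampled force,
# estimated with finite differences.
gfr = np.gradient(grip_force, 1.0 / fs)
peak_gfr = float(gfr.max())
```

On a microcontroller the same quantity would be computed sample-by-sample from successive sensor readings rather than on a stored array.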

Keywords: feedback, gaming, kinesthetic, rehabilitation, tactile

Procedia PDF Downloads 240
607 Human Health Risk Assessment from Metals Present in a Soil Contaminated by Crude Oil

Authors: M. A. Stoian, D. M. Cocarta, A. Badea

Abstract:

The main sources of soil pollution by petroleum contaminants are industrial processes involving crude oil. Soil polluted with crude oil is toxic for plants, animals, and humans. Human exposure to contaminated soil occurs through different pathways: soil ingestion, diet, inhalation, and dermal contact. The present research focuses on soil contamination with heavy metals as a consequence of soil pollution with petroleum products. The human exposure pathways considered are accidental ingestion of contaminated soil and dermal contact. The purpose of the paper is to identify the human health risk (carcinogenic risk) from soil contaminated with heavy metals. Human exposure and risk were evaluated for five contaminants of concern of the eleven identified in the soil. Two soil samples were collected from a bioremediation platform in the Muntenia Region of Romania. The soil deposited on the bioremediation platform was contaminated through oil extraction and processing. For the research work, two average soil samples from two different plots were analyzed: the first was slightly contaminated with petroleum products (Total Petroleum Hydrocarbons (TPH) in soil was 1420 mg/kg d.w.), while the second was highly contaminated (TPH in soil was 24306 mg/kg d.w.). To evaluate the risks posed by heavy metals due to soil pollution with petroleum products, five metals known as carcinogenic were investigated: arsenic (As), cadmium (Cd), chromium VI (CrVI), nickel (Ni), and lead (Pb). Results of the chemical analysis performed on the contaminated soil samples evidenced heavy-metal contamination as follows: As in Site 1 = 6.96 mg/kg d.w.; As in Site 2 = 11.62 mg/kg d.w.; Cd in Site 1 = 0.9 mg/kg d.w.; Cd in Site 2 = 1 mg/kg d.w.; CrVI was 0.1 mg/kg d.w. for both sites; Ni in Site 1 = 37.00 mg/kg d.w.; Ni in Site 2 = 42.46 mg/kg d.w.; Pb in Site 1 = 34.67 mg/kg d.w.; Pb in Site 2 = 120.44 mg/kg d.w. 
These concentrations exceed the normal values established in the Romanian regulation but remain below the alert level for a less sensitive (industrial) use of soil. Although the concentrations do not exceed the alert thresholds, the next step was to assess the human health risk posed by soil contamination with these heavy metals. The resulting risks were compared with the acceptable level (10⁻⁶, according to the World Health Organization). As expected, the highest risk was identified for the soil with the higher degree of contamination: the Individual Risk (IR) was 1.11×10⁻⁵, compared with 8.61×10⁻⁶ for the less contaminated soil.
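The abstract does not give the full exposure model, but carcinogenic risk for the soil-ingestion pathway is conventionally computed US EPA-style as a chronic daily intake multiplied by an oral slope factor. A sketch using common default exposure parameters, which may differ from the values the authors actually used:

```python
# Carcinogenic risk from accidental soil ingestion, US EPA-style.
# The default parameters below are common assumptions, not the ones
# reported by the authors.
def ingestion_risk(conc_mg_kg, slope_factor,
                   ingestion_rate=100,    # mg soil/day (adult default)
                   exposure_freq=350,     # days/year
                   exposure_dur=24,       # years
                   body_weight=70,        # kg
                   avg_time=70 * 365):    # days (lifetime, carcinogens)
    """Chronic daily intake (mg/kg-day) times the oral slope factor."""
    cdi = (conc_mg_kg * 1e-6 * ingestion_rate * exposure_freq *
           exposure_dur) / (body_weight * avg_time)
    return cdi * slope_factor

# Example: arsenic in Site 2 (11.62 mg/kg d.w.) with the EPA IRIS oral
# slope factor for As of 1.5 (mg/kg-day)^-1.
risk_as = ingestion_risk(11.62, 1.5)
```

The computed risk is then compared against the 10⁻⁶ acceptability threshold; the dermal-contact pathway would add an analogous term with its own absorption parameters.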

Keywords: carcinogenic risk, heavy metals, human health risk assessment, soil pollution

Procedia PDF Downloads 422
606 Metabolically Healthy Obesity and Protective Factors of Cardiovascular Diseases as a Result from a Longitudinal Study in Tebessa (East of Algeria)

Authors: Salima Taleb, Kafila Boulaba, Ahlem Yousfi, Nada Taleb, Difallah Basma

Abstract:

Introduction: Obesity is recognized as a cardiovascular risk factor. It is associated with cardio-metabolic diseases, and its prevalence is increasing significantly in both rich and poor countries. However, there are obese people who show no metabolic disturbance, so obesity is not always a risk factor for an abnormal metabolic profile that increases the risk of cardiometabolic problems. Yet there is no standard definition for identifying the Metabolically Healthy but Obese (MHO) group. Objective: The objective of this study is to evaluate the relationship between MHO status and some factors associated with it. Methods: This longitudinal, prospective cohort study followed 600 participants aged ≥ 18 years. Metabolic status was assessed by the following parameters: blood pressure, fasting glucose, total cholesterol, HDL cholesterol, LDL cholesterol, and triglycerides. Body Mass Index (BMI) was calculated as weight in kilograms divided by the square of height in meters (BMI = weight/height²). According to the BMI value, the population was divided into four groups: underweight subjects with BMI < 18.5 kg/m², normal-weight subjects with BMI = 18.5-24.9 kg/m², overweight subjects with BMI = 25-29.9 kg/m², and obese subjects with BMI ≥ 30 kg/m². A value of P < 0.05 was considered significant. Statistical processing was done using the SPSS 25 software. Results: In this study, 194 (32.33%) participants were identified as MHO among the 416 (37%) obese individuals. The prevalence of the metabolically unhealthy phenotype among normal-weight individuals was 13.83%, vs. 37% in obese individuals. Compared with metabolically healthy normal-weight individuals (10.93%), the prevalence of diabetes was 30.60% in MHO, 20.59% in metabolically unhealthy normal-weight, and 52.29% in metabolically unhealthy obese individuals (p = 0.032). 
Blood pressure was significantly higher in MHO individuals than in metabolically healthy normal-weight individuals and in metabolically unhealthy obese than in metabolically unhealthy normal weight (P < 0.0001). Familial coronary artery disease does not appear to have an effect on the metabolic status of obese and normal-weight patients (P = 0.544). However, waist circumference appears to have an effect on the metabolic status of individuals (P < 0.0001). Conclusion: This study showed a high prevalence of metabolic profile disruption in normal-weight subjects and a high rate of overweight and/or obese people who are metabolically healthy. To understand the physiological mechanism related to these metabolic statuses, a thorough study is needed.
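The BMI cut-offs described in the Methods can be sketched directly (the example weight and height are illustrative):

```python
# BMI and the WHO weight categories used in the study.
def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

def bmi_category(b):
    if b < 18.5:
        return "underweight"
    elif b < 25:
        return "normal weight"
    elif b < 30:
        return "overweight"
    return "obese"

# Example: 95 kg at 1.75 m
b = bmi(95, 1.75)          # ≈ 31.0 kg/m²
category = bmi_category(b)  # "obese"
```

Each participant would additionally be labeled metabolically healthy or unhealthy from the blood pressure, glucose, and lipid parameters listed above.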

Keywords: metabolic health, obesity, associated factors, cardiovascular diseases

Procedia PDF Downloads 117
605 A Method for Clinical Concept Extraction from Medical Text

Authors: Moshe Wasserblat, Jonathan Mamou, Oren Pereg

Abstract:

Natural Language Processing (NLP) has made a major leap in the last few years in its practical integration into medical solutions; for example, extracting clinical concepts from medical texts such as medical conditions, medications, treatments, and symptoms. However, training and deploying such models in real environments still demands a large amount of annotated data and NLP/Machine Learning (ML) expertise, which makes the process costly and time-consuming. We present a practical and efficient method for clinical concept extraction that requires neither costly labeled data nor ML expertise. The method includes three steps. Step 1: the user supplies a large in-domain text corpus (e.g., PubMed). The system then builds, in an unsupervised manner, a contextual model containing vector representations of the concepts in the corpus (e.g., with Phrase2Vec). Step 2: the user provides a seed set of terms representing a specific medical concept (e.g., for the concept of symptoms, the user may provide ‘dry mouth,’ ‘itchy skin,’ and ‘blurred vision’). The system then matches the seed set against the contextual model and extracts the most semantically similar terms (e.g., additional symptoms). The result is a complete set of terms related to the medical concept. Step 3: in production, medical concepts must be extracted from unseen medical text. The system extracts key phrases from the new text and matches them against the complete set of terms from Step 2; the most semantically similar phrases are annotated with the same medical concept category. As an example, the seed symptom concepts would result in the following annotation: “The patient complains of fatigue [symptom], dry skin [symptom], and weight loss [symptom], which can be an early sign of diabetes.” Our evaluations show promising results for extracting concepts from medical corpora. 
The method allows medical analysts to easily and efficiently build taxonomies (in step 2) representing their domain-specific concepts, and automatically annotate a large number of texts (in step 3) for classification/summarization of medical reports.
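The seed-set expansion of Step 2 can be sketched with toy vectors; in practice the embeddings would come from a model such as Phrase2Vec trained on the in-domain corpus, and the 4-dimensional vectors below are made up for illustration:

```python
import numpy as np

# Toy term embeddings; real vectors would be learned from a corpus.
embeddings = {
    "dry mouth":      np.array([0.9, 0.1, 0.0, 0.2]),
    "itchy skin":     np.array([0.8, 0.2, 0.1, 0.1]),
    "blurred vision": np.array([0.85, 0.15, 0.05, 0.2]),
    "fatigue":        np.array([0.8, 0.1, 0.1, 0.3]),   # symptom-like
    "metformin":      np.array([0.1, 0.9, 0.8, 0.0]),   # medication-like
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def expand(seed_terms, vocab, threshold=0.9):
    # Average the seed vectors, then keep vocabulary terms whose
    # cosine similarity to the centroid exceeds the threshold.
    centroid = np.mean([vocab[t] for t in seed_terms], axis=0)
    return [t for t in vocab
            if t not in seed_terms and cosine(vocab[t], centroid) >= threshold]

symptoms = expand(["dry mouth", "itchy skin", "blurred vision"], embeddings)
# "fatigue" is pulled in; "metformin" is not.
```

Step 3 would then match key phrases from unseen text against the expanded set using the same similarity measure.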

Keywords: clinical concepts, concept expansion, medical records annotation, medical records summarization

Procedia PDF Downloads 135