Search results for: ether extract
220 Perspectives and Challenges of a Functional Bread With Yeast Extract to Improve Human Diet
Authors: Cláudia Patrocínio, Beatriz Fernandes, Ana Filipa Pires
Abstract:
Background: Mirror therapy (MT) is used to improve motor function after stroke. During MT, a mirror is placed between the two upper limbs (UL), reflecting movements of the non-affected side as if they were movements of the affected side. Objectives: The aim of this review is to analyze the evidence on the effectiveness of MT in the recovery of UL function in the chronic post-stroke population. Methods: The literature search was carried out in PubMed, ISI Web of Science, and the PEDro database. Inclusion criteria: a) studies that include individuals diagnosed with stroke for at least 6 months; b) intervention with MT in the UL, alone or compared with other interventions; c) articles published until 2023; d) articles published in English or Portuguese; e) randomized controlled studies. Exclusion criteria: a) animal studies; b) studies that do not provide a detailed description of the intervention; c) studies using central electrical stimulation. The methodological quality of the included studies was assessed using the Physiotherapy Evidence Database (PEDro) scale; studies scoring < 4 on the PEDro scale were excluded. Eighteen studies met all the inclusion criteria. Main results and conclusions: The quality of the studies varies between 5 and 8. One article compared muscular strength training (MST) with and without MT; four articles compared MT vs conventional therapy (CT); one study compared extracorporeal shock therapy (EST) with and without MT; another study compared functional electrical stimulation (FES), MT, and biofeedback; three studies compared MT with Mesh Glove (MG) or sham therapy; five articles compared bimanual exercises performed with and without MT; and three studies compared MT with virtual reality (VR) or robot training (RT). Changes in function and structure (International Classification of Functioning, Disability and Health parameter) were assessed in each article mainly using the Fugl-Meyer Assessment-Upper Limb scale; activity and participation (International Classification of Functioning, Disability and Health parameters) were evaluated using different scales in each study. Globally, positive results were seen in these parameters. Results suggest that MT combined with other therapies is more effective for motor recovery and function of the affected UL than these techniques alone, although the results were modest in most of the included studies. There is also a more significant improvement in the distal movements of the affected hand than in the rest of the UL.
Keywords: physical therapy, mirror therapy, chronic stroke, upper limb, hemiplegia
Procedia PDF Downloads 552
219 Moderate Electric Field Influence on Carotenoids Extraction Time from Heterochlorella luteoviridis
Authors: Débora P. Jaeschke, Eduardo A. Merlo, Rosane Rech, Giovana D. Mercali, Ligia D. F. Marczak
Abstract:
Carotenoids are high value-added pigments that can be extracted from some microalgae species as an alternative source. However, the application of carotenoids synthesized by microalgae is still limited due to the utilization of toxic organic solvents. In this context, studies involving alternative extraction methods have been conducted with more sustainable solvents to replace and reduce the solvent volume and the extraction time. The aim of the present work was to evaluate the extraction time of carotenoids from the microalga Heterochlorella luteoviridis using moderate electric field (MEF) as a pre-treatment to the extraction. The extraction methodology consisted of a pre-treatment in the presence of MEF (180 V) and ethanol (25%, v/v) for 10 min, followed by a diffusive step performed for 50 min using a higher ethanol concentration (75%, v/v). The extraction experiments were conducted at 30 °C; to keep the temperature at this value, an extraction cell with a water jacket connected to a water bath was used. Also, to enable the evaluation of the MEF effect on the extraction, control experiments were performed using the same cell and conditions without voltage application. During the extraction experiments, samples were withdrawn at 1, 5 and 10 min of the pre-treatment and at 1, 5, 30, 40 and 50 min of the diffusive step. Samples were then centrifuged, and carotenoids analyses were performed on the supernatant. Furthermore, an exhaustive extraction with ethyl acetate and methanol was performed, and the carotenoids content found in this analysis was considered the total carotenoids content of the microalga. The results showed that the application of MEF as a pre-treatment influenced the extraction yield and the extraction time during the diffusive step; after the MEF pre-treatment and 50 min of the diffusive step, it was possible to extract up to 60% of the total carotenoids content. Also, the carotenoids concentrations of the extracts withdrawn at 5 and 30 min of the diffusive step did not present a statistically significant difference, meaning that carotenoids diffusion occurs mainly at the very beginning of the extraction. On the other hand, the results for the control experiments showed that carotenoids diffusion occurs over most of the 30 min of the diffusive step, which evidenced the MEF effect on the extraction time. Moreover, carotenoids concentrations in samples withdrawn during the pre-treatment (1, 5 and 10 min) were below the quantification limit of the analysis, indicating that the extraction occurred in the diffusive step, when ethanol (75%, v/v) was added to the medium. It is possible that MEF promoted cell membrane permeabilization and, when ethanol (75%) was added, carotenoids interacted with the solvent and diffused more easily. Based on the results, it is possible to infer that MEF decreased the carotenoids extraction time due to increased permeability of the cell membrane, which facilitates diffusion from the cell to the medium.
Keywords: moderate electric field (MEF), pigments, microalgae, ethanol
Procedia PDF Downloads 464
218 Analysis of Brownfield Soil Contamination Using Local Government Planning Data
Authors: Emma E. Hellawell, Susan J. Hughes
Abstract:
Brownfield sites are currently being redeveloped for residential use. Information on soil contamination on these former industrial sites is collected as part of the planning process by local government. This research project analyses this untapped resource of environmental data, using site investigation data submitted to a local Borough Council in Surrey, UK. Over 150 site investigation reports were collected and interrogated to extract relevant information. This study involved three phases. Phase 1 was the development of a database for soil contamination information from local government reports. This database contained information on the source, history, and quality of the data together with the chemical information on the soil that was sampled. Phase 2 involved obtaining site investigation reports for developments within the study area and extracting the required information for the database. Phase 3 was the data analysis and interpretation of key contaminants to evaluate their typical levels and distribution within the study area and to relate these results to current guideline levels of risk for future site users. Preliminary results for a pilot study using a sample of the dataset have been obtained. This pilot study showed some inconsistency in the quality of the reports and measured data, and careful interpretation of the data is required. Analysis of the information has found high levels of lead in shallow soil samples, with mean and median levels exceeding the current guidance for residential use. The data also showed elevated (but below guidance) levels of potentially carcinogenic polyaromatic hydrocarbons. Of particular concern was the high detection rate for asbestos fibers, which were found at low concentrations in 25% of the soil samples tested (although the sample set was small). Contamination levels of the remaining chemicals tested were all below the guidance levels for residential site use. These preliminary pilot study results will be expanded, and results for the whole local government area will be presented at the conference. The pilot study has demonstrated the potential for this extensive dataset to provide greater information on local contamination levels. This can help inform regulators and developers and lead to more targeted site investigations, improved risk assessments, and better brownfield development.
Keywords: Brownfield development, contaminated land, local government planning data, site investigation
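The Phase 3 analysis lends itself to a simple tabular workflow. The sketch below, in Python with pandas, shows one way to compute the summary statistics described above (mean, median, detection rate, exceedance of a residential guideline) once the Phase 2 database has been loaded into a DataFrame; the column names and guideline thresholds are hypothetical placeholders, not values from the study.

```python
# A minimal sketch of the Phase 3 analysis, assuming a pandas DataFrame built in
# Phase 2 with hypothetical columns; the guideline values below are placeholders,
# not the actual UK residential screening values.
import pandas as pd

GUIDELINES_MG_KG = {"lead": 200.0, "benzo_a_pyrene": 5.0}  # placeholder thresholds

def summarise_contaminant(df: pd.DataFrame, analyte: str) -> dict:
    """Summarise one analyte and flag exceedance of a residential guideline."""
    conc = df.loc[df["analyte"] == analyte, "concentration_mg_kg"].dropna()
    guideline = GUIDELINES_MG_KG.get(analyte)
    return {
        "analyte": analyte,
        "n_samples": len(conc),
        "mean": conc.mean(),
        "median": conc.median(),
        "p95": conc.quantile(0.95),
        "detection_rate": (conc > 0).mean(),
        "share_above_guideline": (conc > guideline).mean() if guideline else None,
    }

# Example: lead in shallow soil samples only
# shallow = df[df["depth_m"] <= 1.0]
# print(summarise_contaminant(shallow, "lead"))
```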
Procedia PDF Downloads 140
217 A Multi-Criteria Decision Making Approach for Disassembly-To-Order Systems under Uncertainty
Authors: Ammar Y. Alqahtani
Abstract:
In order to minimize the negative impact on the environment, it is essential to properly manage the waste generated from the premature disposal of end-of-life (EOL) products. Consequently, governments and international organizations have introduced new policies and regulations to minimize the amount of waste being sent to landfills. Moreover, consumers’ awareness regarding the environment has forced original equipment manufacturers to be more environmentally conscious. Therefore, manufacturers have considered different ways of dealing with waste from EOL products, viz., remanufacturing, reusing, recycling, or disposing of them. When EOL products are remanufactured, reused, or recycled, manufacturers can reduce the rate of depletion of virgin natural resources and their dependency on them, as well as cut the amount of harmful waste sent to landfills. Disposal of EOL products, by contrast, contributes to the problem and is therefore used only as a last option. The number of EOL products needed must be estimated in order to fulfill component demand; a disassembly process is then performed to extract individual components and subassemblies. Smart products, built with embedded sensors and network connectivity to enable the collection and exchange of data, utilize sensors implanted into products during production. These sensors allow remanufacturers to predict an optimal warranty policy and time period to offer to customers who purchase remanufactured components and products. Sensor-provided data can help evaluate the overall condition of a product, as well as the remaining lives of its components, prior to performing a disassembly process. In this paper, a multi-period disassembly-to-order (DTO) model is developed that takes the different system uncertainties into consideration. The DTO model is solved using Nonlinear Programming (NLP) over multiple periods. A DTO system is considered where a variety of EOL products are purchased for disassembly. The model’s main objective is to determine the best combination of EOL products to be purchased from each supplier in each period, maximizing the total profit of the system while satisfying the demand. This paper also addresses the impact of sensor-embedded products on the cost of warranties. Lastly, a case study involving various simulation conditions is presented and analyzed to illustrate the applicability of the model.
Keywords: closed-loop supply chains, environmentally conscious manufacturing, product recovery, reverse logistics
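As a rough illustration of the purchase-planning decision at the core of the DTO model, the sketch below sets up a simplified single-period, linear version (the study itself solves a multi-period nonlinear program); all suppliers, yields, costs, and demands are hypothetical.

```python
# A simplified single-period sketch of the DTO purchase decision, assuming linear
# costs and yields (the paper solves a multi-period nonlinear program); all
# numbers below are hypothetical.
from scipy.optimize import linprog

# Two suppliers x two EOL product types -> decision vector x (units purchased).
cost = [12.0, 15.0, 9.0, 11.0]               # purchase + disassembly cost per unit
revenue_per_unit = [20.0, 20.0, 16.0, 16.0]  # resale value of recovered parts
profit = [r - c for r, c in zip(revenue_per_unit, cost)]

# Component yield per unit of each purchase option (rows: components A and B).
yield_matrix = [[2, 2, 1, 1],
                [1, 1, 3, 3]]
demand = [400, 600]  # required quantities of components A and B

# linprog minimises, so negate profit; recovered yields must cover demand.
res = linprog(c=[-p for p in profit],
              A_ub=[[-y for y in row] for row in yield_matrix],
              b_ub=[-d for d in demand],
              bounds=[(0, 500)] * 4)  # hypothetical supplier availability caps
print(res.x, -res.fun)  # units to buy from each option, total profit
```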
Procedia PDF Downloads 138
216 High Altitude Glacier Surface Mapping in Dhauliganga Basin of Himalayan Environment Using Remote Sensing Technique
Authors: Aayushi Pandey, Manoj Kumar Pandey, Ashutosh Tiwari, Kireet Kumar
Abstract:
Glaciers play an important role in the climate system and are sensitive indicators of global climate change. Glaciers in the Himalayas are unique as they are predominantly valley-type glaciers located in high-altitude tropical regions. These glaciers are often covered with debris, which greatly affects their ablation rate and works as a sensitive indicator of glacier health. The aim of this study is to map the high-altitude glacier surface, with a focus on glacial lake and debris estimation, using different techniques on Nagling glacier of the Dhauliganga basin in the Himalayan region. Different image classification techniques, i.e., thresholding on different band ratios and supervised classification using the maximum likelihood classifier (MLC), were applied to high-resolution Sentinel-2A Level-1C satellite imagery of 14 October 2017. Here, the Near Infrared (NIR)/Shortwave Infrared (SWIR) ratio image was used to extract the glaciated classes (Snow, Ice, Ice Mixed Debris) from other non-glaciated terrain classes. The SWIR/Blue ratio image was used to map valley rock and debris, while the Green/NIR ratio image was found most suitable for mapping the glacial lake. Accuracy assessment was performed against high-resolution (3 m) PlanetScope imagery using 60 stratified random points. The overall accuracy of MLC was 85%, while the accuracy of the band ratios was 96.66%. According to the band ratio technique, the total areal extent of glaciated classes (Snow, Ice, IMD) in Nagling glacier was 10.70 km², nearly 38.07% of the study area, comprising 30.87% snow-covered area, 3.93% ice and 3.27% IMD-covered area. Non-glaciated classes (vegetation, glacial lake, debris and valley rock) covered 61.93% of the total area, of which valley rock is dominant with 33.83% coverage, followed by debris covering 27.7% of the area in Nagling glacier. The glacial lake and debris were accurately mapped using the band ratio technique. Hence, the band ratio approach appears to be useful for mapping debris-covered glaciers in the Himalayan region.
Keywords: band ratio, Dhauliganga basin, glacier mapping, Himalayan region, maximum likelihood classifier (MLC), Sentinel-2 satellite image
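The band-ratio classification described above reduces to a few array operations. The sketch below assumes the Sentinel-2 bands have already been read into NumPy arrays (e.g., with rasterio) and uses illustrative thresholds rather than the ones tuned in the study.

```python
# A minimal sketch of the band-ratio thresholding described above, assuming the
# Sentinel-2 bands are NumPy arrays; the threshold values are illustrative,
# not the ones used in the study.
import numpy as np

def ratio(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Band ratio with protection against division by zero."""
    return np.divide(a, b, out=np.zeros_like(a, dtype=float), where=b > 0)

def classify(nir, swir, blue, green, t_glacier=2.0, t_lake=1.8, t_rock=1.5):
    nir_swir = ratio(nir, swir)    # separates snow/ice/IMD from non-glaciated terrain
    swir_blue = ratio(swir, blue)  # highlights valley rock and debris
    green_nir = ratio(green, nir)  # most suitable for glacial lakes
    classes = np.zeros(nir.shape, dtype=np.uint8)
    classes[nir_swir > t_glacier] = 1                   # glaciated (snow, ice, IMD)
    classes[(classes == 0) & (green_nir > t_lake)] = 2  # glacial lake
    classes[(classes == 0) & (swir_blue > t_rock)] = 3  # valley rock / debris
    return classes
```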
Procedia PDF Downloads 230
215 Anti-Leishmanial Compounds from the Seaweed Padina pavonica
Authors: Nahal Najafi, Afsaneh Yegdaneh, Sedigheh Saberi
Abstract:
Introduction: Leishmaniasis poses a substantial global risk, affecting millions of people and resulting in thousands of cases each year in endemic regions. Challenges in current leishmaniasis treatments include drug resistance, high toxicity, and pancreatitis. Marine compounds, particularly from brown algae, serve as a valuable source of inspiration for discovering treatments against Leishmania. Material and method: Padina pavonica was collected from the Persian Gulf. The seaweeds were dried and extracted with methanol:ethyl acetate (1:1). The extract was partitioned into hexane (Hex), dichloromethane (DCM), butanol, and water by the Kupchan partitioning method. The Hex partition was fractionated by silica gel column chromatography into 10 fractions (Fr. 1-10). Fr. 6 was further separated by normal-phase HPLC to yield compounds 1-3. The structures of the isolated compounds were elucidated by NMR, mass spectrometry, and other spectroscopic methods. The Hex and DCM partitions, Fr. 6 and compounds 1-3 were tested for anti-leishmanial activity. RAW cell lines were cultured in enriched RPMI (10% FBS, 1% pen-strep) in a 37 °C, 5% CO2 incubator, while promastigote cells were initially cultured in NNN culture and subsequently transferred to the aforementioned medium. Cytotoxicity was assessed using MTT tests, anti-promastigote activity was evaluated through promastigote counting in a hemocytometer chamber, and the impact of amastigote damage was determined by counting amastigotes within 100 macrophages. Results: NMR and mass spectrometry identified the isolated compounds as fucosterol and two sulfoquinovosyldiacylglycerols (SQDG). Among the samples tested, Fr. 6 exhibited the highest cytotoxicity (CC50=60.24), while compound 2 showed the lowest cytotoxicity (CC50=21984). Compound 1 and the dichloromethane fraction demonstrated the highest and lowest anti-promastigote activity (IC50=115.7 and IC50=16.42, respectively), and compound 1 and the hexane fraction exhibited the highest and lowest anti-amastigote activity (IC50=7.874 and IC50=40.18, respectively). Conclusion: All six samples, including the Hex and DCM partitions, Fr. 6, and compounds 1-3, demonstrate a noteworthy correlation between rising concentration and time, with a statistically significant P-value of ≤0.05. Considering the higher selectivity index of compound 2 compared to the others, it can be inferred that the presence of sulfur groups and unsaturated chains potentially contributes to these effects by impeding the DNA polymerase, which, of course, needs more research.
Keywords: Padina, leishmania, sulfoquinovosyldiacylglycerol, cytotoxicity
Procedia PDF Downloads 23
214 Evaluation of the CRISP-DM Business Understanding Step: An Approach for Assessing the Predictive Power of Regression versus Classification for the Quality Prediction of Hydraulic Test Results
Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter
Abstract:
Digitalisation in production technology is a driver for the application of machine learning methods. Through predictive quality, the great potential for saving necessary quality control can be exploited via data-based prediction of product quality and states. However, the serial use of machine learning applications is often prevented by various problems. Fluctuations occur in real production data sets, which are reflected in trends and systematic shifts over time. To counteract these problems, data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets to extract stable features. Successful process control of the target variables aims to centre the measured values around a mean and minimise variance. Competitive leaders claim to have mastered their processes; as a result, much of the real data has a relatively low variance. For the training of prediction models, the highest possible generalisability is required, which is made more difficult by this data availability. The implementation of a machine learning application can be interpreted as a production process. The CRoss Industry Standard Process for Data Mining (CRISP-DM) is a process model with six phases that describes the life cycle of data science. As in any process, the cost to eliminate errors increases significantly with each advancing process phase. For the quality prediction of hydraulic test steps of directional control valves, the question arises in the initial phase whether a regression or a classification is more suitable. In this work, the initial phase of CRISP-DM, the business understanding, is critically compared with regard to regression and classification for the use case at Bosch Rexroth. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predict the quality characteristics of workpieces. Suitable methods for leakage volume flow regression and for the classification of the inspection decision are applied. Impressively, classification is clearly superior to regression and achieves promising accuracies.
Keywords: classification, CRISP-DM, machine learning, predictive quality, regression
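The business-understanding comparison can be prototyped quickly once a feature matrix is available. The sketch below contrasts a regressor on the leakage volume flow with a classifier on the derived pass/fail label; random forests and the acceptance limit are stand-in assumptions, since the abstract does not fix the exact models or thresholds.

```python
# A minimal sketch of the regression-vs-classification comparison, assuming a
# feature matrix X from cross-process production data and a measured leakage
# volume flow y; the pass/fail limit and model choice are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import cross_val_score

LEAK_LIMIT = 0.5  # hypothetical acceptance limit for leakage volume flow

def compare(X: np.ndarray, y_leakage: np.ndarray) -> dict:
    y_pass = (y_leakage <= LEAK_LIMIT).astype(int)  # inspection decision label
    reg_r2 = cross_val_score(RandomForestRegressor(), X, y_leakage,
                             cv=5, scoring="r2").mean()
    clf_acc = cross_val_score(RandomForestClassifier(), X, y_pass,
                              cv=5, scoring="accuracy").mean()
    return {"regression_r2": reg_r2, "classification_accuracy": clf_acc}
```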
Procedia PDF Downloads 145
213 Leadership and Corporate Social Responsibility: The Role of Spiritual Intelligence
Authors: Meghan E. Murray, Carri R. Tolmie
Abstract:
This study aims to identify potential factors and widely applicable best practices that can contribute to improving corporate social responsibility (CSR) and corporate performance for firms by exploring the relationship between transformational leadership, spiritual intelligence, and emotional intelligence. Corporate social responsibility means that companies are cognizant of the impact of their actions on the economy, their communities, the environment, and the world as a whole, and execute business practices accordingly. The prevalence of CSR has continuously strengthened over the past few years and is now common practice in the business world, with such efforts coinciding with what stakeholders and the public now expect from corporations. Because of this, it is extremely important to pinpoint factors and best practices that can improve CSR within corporations. One potential factor that may lead to improved CSR is spiritual intelligence (SQ), the ability to recognize and live with a purpose larger than oneself. Spiritual intelligence is a measurable skill, just like emotional intelligence (EQ), and can be improved through purposeful and targeted coaching. This research project consists of two studies. Study 1 is a case study comparison of a benefit corporation and a non-benefit corporation. It will examine the role of SQ and EQ as moderators in the relationship between the transformational leadership of employees within each company and the perception of each firm’s CSR and corporate performance. The project methodology includes creating and administering a survey comprised of multiple pre-established scales on transformational leadership, spiritual intelligence, emotional intelligence, CSR, and corporate performance. Multiple regression analysis will be used to extract significant findings from the collected data. Study 2 will dive deeper into spiritual intelligence itself by analyzing pre-existing data and identifying key relationships that may provide value to companies and their stakeholders. This will be done by performing multiple regression analysis on anonymized data provided by Deep Change, a company that has created an advanced, proprietary system to measure spiritual intelligence. Based on the results of both studies, this research aims to uncover best practices, including the unique contribution of spiritual intelligence, that organizations can utilize to enhance their corporate social responsibility. If it is found that high spiritual and emotional intelligence can positively impact CSR efforts, then corporations will have a tangible way to enhance their CSR: providing targeted employees with training and coaching to increase their SQ and EQ.
Keywords: corporate social responsibility, CSR, corporate performance, emotional intelligence, EQ, spiritual intelligence, SQ, transformational leadership
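Since Study 1 plans a moderation analysis, a sketch of what that regression could look like is shown below, using hypothetical column names for the survey scores; the interaction terms carry the moderation hypothesis.

```python
# A minimal sketch of the planned moderation analysis, assuming survey scores
# in a pandas DataFrame with hypothetical column names; SQ and EQ enter as
# moderators of the transformational-leadership -> CSR-perception relationship.
import pandas as pd
import statsmodels.formula.api as smf

def fit_moderation(df: pd.DataFrame):
    # Interaction terms (tl:sq, tl:eq) test whether SQ and EQ moderate the
    # effect of transformational leadership (tl) on perceived CSR (csr).
    model = smf.ols("csr ~ tl + sq + eq + tl:sq + tl:eq", data=df)
    return model.fit()

# result = fit_moderation(survey_df)
# print(result.summary())
```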
Procedia PDF Downloads 128
212 Biflavonoids from Selaginellaceae as Epidermal Growth Factor Receptor Inhibitors and Their Anticancer Properties
Authors: Adebisi Adunola Demehin, Wanlaya Thamnarak, Jaruwan Chatwichien, Chatchakorn Eurtivong, Kiattawee Choowongkomon, Somsak Ruchirawat, Nopporn Thasana
Abstract:
The epidermal growth factor receptor (EGFR) is a transmembrane glycoprotein involved in cellular signalling processes, and its aberrant activity is crucial in the development of many cancers, such as lung cancer. Selaginellaceae are fern allies that have long been used in traditional Chinese medicine to treat various cancer types, especially lung cancer. Biflavonoids, the major secondary metabolites of Selaginellaceae, have numerous pharmacological activities, including anti-cancer and anti-inflammatory effects. For instance, amentoflavone induces a cytotoxic effect in a human NSCLC cell line via the inhibition of PARP-1. However, to the best of our knowledge, there are no studies on biflavonoids as EGFR inhibitors. Thus, this study aims to investigate the EGFR inhibitory activities of biflavonoids isolated from Selaginella siamensis and Selaginella bryopteris. Amentoflavone, tetrahydroamentoflavone, sciadopitysin, robustaflavone, robustaflavone-4-methylether, delicaflavone, and chrysocauloflavone were isolated from the ethyl acetate extract of the whole plants. The structures were determined using NMR spectroscopy and mass spectrometry. An in vitro study was conducted to evaluate their cytotoxicity against A549, HEPG2, and T47D human cancer cell lines using the MTT assay. In addition, a target-based kinase inhibition assay was performed to investigate their EGFR inhibitory activity. Finally, a molecular docking study was conducted to predict the binding modes of the compounds. Robustaflavone-4-methylether and delicaflavone showed the best cytotoxic activity on all the cell lines, with IC50 (µM) values of 18.9 ± 2.1 and 22.7 ± 3.3 on A549, respectively. Of these biflavonoids, delicaflavone showed the most potent EGFR inhibitory activity, with 84% relative inhibition at 0.02 nM using erlotinib as a positive control; robustaflavone-4-methylether showed 78% inhibition at 0.15 nM. The docking scores obtained from the molecular docking study correlated with the kinase inhibition assay: robustaflavone-4-methylether and delicaflavone had docking scores of 72.0 and 86.5, respectively. The inhibitory activity of delicaflavone seemed to be linked to the C2''=C3'' and 3-O-4''' linkage pattern. Thus, this study suggests that the structural features of these compounds could serve as a basis for developing new EGFR-TK inhibitors.
Keywords: anticancer, biflavonoids, EGFR, molecular docking, Selaginellaceae
Procedia PDF Downloads 198
211 Handling, Exporting and Archiving Automated Mineralogy Data Using TESCAN TIMA
Authors: Marek Dosbaba
Abstract:
Within the mining sector, SEM-based Automated Mineralogy (AM) has been the standard application for quickly and efficiently handling mineral processing tasks. Over the last decade, the trend has been to analyze larger numbers of samples, often with a higher level of detail. This has necessitated a shift from interactive sample analysis performed by an operator using a SEM to an increased reliance on offline processing to analyze and report the data. In response to this trend, the TESCAN TIMA Mineral Analyzer is designed to quickly create a virtual copy of the studied samples, thereby preserving all the necessary information. Depending on the selected data acquisition mode, TESCAN TIMA can perform hyperspectral mapping and save an X-ray spectrum for each pixel or segment, respectively. This approach allows the user to browse through elemental distribution maps of all elements detectable by means of energy-dispersive spectroscopy. Re-evaluation of the existing data for the presence of previously unconsidered elements is possible without the need to repeat the analysis. Additional tiers of data, such as secondary electron or cathodoluminescence images, can also be recorded. To take full advantage of these information-rich datasets, TIMA utilizes a new archiving tool introduced by TESCAN. The dataset size can be reduced for long-term storage, and all information can be recovered on demand in case of renewed interest. TESCAN TIMA is optimized for network storage of its datasets because of the larger data storage capacity of servers compared to local drives, which also allows multiple users to access the data remotely. This goes hand in hand with support for remote control of the entire data acquisition process. TESCAN also brings a newly extended open-source data format that allows other applications to extract, process, and report AM data. This offers the ability to link TIMA data to large databases feeding plant performance dashboards or geometallurgical models. The traditional tabular particle-by-particle or grain-by-grain export process is preserved and can be customized with scripts to include user-defined particle/grain properties.
Keywords: TESCAN, electron microscopy, mineralogy, SEM, automated mineralogy, database, TESCAN TIMA, open format, archiving, big data
Procedia PDF Downloads 111
210 DenseNet and Autoencoder Architecture for COVID-19 Chest X-Ray Image Classification and Improved U-Net Lung X-Ray Segmentation
Authors: Jonathan Gong
Abstract:
Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to better the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, X-rays have not been widely used to detect and diagnose COVID-19. This underuse is mainly due to low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field suggests that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database. This dataset includes images and masks of chest X-rays under the labels of COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4035 images and validated on 807 separate images. The images used to train the classification model include an important feature: the pictures are cropped beforehand to eliminate distractions during training. The image segmentation model uses an improved U-Net architecture to extract the lung mask from the chest X-ray image; it is trained on 8577 images and validated on a 20% validation split. The models’ accuracy, precision, recall, f1-score, IoU, and loss are calculated on the external validation dataset. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IoU of 0.928. Conclusion: The proposed models can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.
Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning
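A minimal sketch of the transfer-learning stage is shown below in Keras; it reproduces only a frozen DenseNet201 backbone plus a small classification head, not the study's autoencoder stage or exact hyperparameters.

```python
# A minimal sketch of the transfer-learning classifier, assuming Keras /
# TensorFlow; the autoencoder stage and the study's exact hyperparameters
# are not reproduced here.
import tensorflow as tf

def build_classifier(input_shape=(224, 224, 3), n_classes=3):
    base = tf.keras.applications.DenseNet201(
        include_top=False, weights="imagenet", input_shape=input_shape)
    base.trainable = False  # freeze the pre-trained convolutional features
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(n_classes, activation="softmax"),  # COVID-19 / normal / pneumonia
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```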
Procedia PDF Downloads 131
209 Using Hemicellulosic Liquor from Sugarcane Bagasse to Produce Second Generation Lactic Acid
Authors: Regiane A. Oliveira, Carlos E. Vaz Rossell, Rubens Maciel Filho
Abstract:
Lactic acid, besides being a valuable chemical, may be considered a platform for other chemicals. In fact, the feasibility of hemicellulosic sugars as feedstock for the lactic acid production process may remove some of the barriers to second generation bioproducts, especially bearing in mind the 5-carbon sugars from the pre-treatment of sugarcane bagasse. With this in mind, the purpose of this study was to use the hemicellulosic liquor from sugarcane bagasse as a substrate to produce lactic acid by fermentation. To release the sugars from hemicellulose, a pre-treatment with dilute sulfuric acid was performed in order to obtain a xylose-rich liquor with a low concentration of fermentation-inhibiting compounds (≈67% xylose, ≈21% glucose, ≈10% cellobiose and arabinose, and around 1% inhibiting compounds such as furfural, hydroxymethylfurfural and acetic acid). The hemicellulosic sugars, supplemented with 20 g/L of yeast extract, were used in a fermentation process with Lactobacillus plantarum to produce lactic acid. The fermentation pH was controlled at 6.00 by automatic injection of Ca(OH)2. The lactic acid concentration remained stable from the time the glucose was depleted (48 hours of fermentation), with no further production. While lactic acid is produced, xylose and glucose are concomitantly consumed. The fermentation yield was 0.933 g lactic acid/g sugars. Moreover, no by-products were detected, which suggests that the microorganism performs a homolactic fermentation, producing its energy via the pentose-phosphate pathway. Through facultative heterofermentative metabolism, bacteria such as L. plantarum consume pentoses, but the energy efficiency for the cell is lower than during hexose consumption. This implies both slower cell growth and a reduction in lactic acid productivity compared with the use of hexoses. Also, L. plantarum showed the capacity to produce lactic acid from hemicellulosic hydrolysate without detoxification, which is very attractive in terms of robustness for an industrial process. Xylose from hydrolyzed bagasse without detoxification is consumed, although the inhibitors in the hydrolyzed bagasse (especially aromatic inhibitors) affect the productivity and yield of lactic acid. The use of these sugars and the lack of need for detoxification of the C5 liquor from hydrolyzed sugarcane bagasse are crucial factors for the economic viability of second generation processes. Taking this information into account, the production of second generation lactic acid using sugars from hemicellulose appears to be a good alternative for the complete utilization of the sugarcane plant, directing molasses and cellulosic carbohydrates to produce 2G ethanol, and hemicellulosic carbohydrates to produce 2G lactic acid.
Keywords: fermentation, lactic acid, hemicellulosic sugars, sugarcane
Procedia PDF Downloads 374
208 Unlocking Health Insights: Studying Data for Better Care
Authors: Valentina Marutyan
Abstract:
Healthcare data mining is a rapidly developing field at the intersection of technology and medicine that has the potential to change our understanding of and approach to providing healthcare. Healthcare data mining is the process of examining huge amounts of data to extract useful information that can be applied to improve patient care, treatment effectiveness, and overall healthcare delivery. The field looks for patterns, trends, and correlations in a variety of healthcare datasets, such as electronic health records (EHRs), medical imaging, patient demographics, and treatment histories, using advanced analytical approaches. Predictive analysis using historical patient data is a major area of interest in healthcare data mining. It enables doctors to intervene early to prevent problems or improve outcomes for patients, and it assists in early disease detection and customized treatment planning for each person. Doctors can tailor a patient's care by looking at their medical history, genetic profile, and current and previous therapies; in this way, treatments can be more effective and have fewer negative consequences. Beyond helping patients, it improves the efficiency of hospitals, for example by helping them determine the number of beds or doctors they require given the number of patients they expect. In this project, models such as logistic regression, random forests, and neural networks are used for predicting diseases and analyzing medical images. Patients were grouped using algorithms such as k-means, and connections between treatments and patient responses were identified by association rule mining. Time series techniques helped in resource management by predicting patient admissions. These methods improved healthcare decision-making and personalized treatment. Healthcare data mining must also deal with difficulties such as poor data quality, privacy challenges, managing large and complicated datasets, ensuring the reliability of models, managing biases, limited data sharing, and regulatory compliance. Ultimately, data mining in healthcare helps medical professionals and hospitals make better decisions, treat patients more effectively, and work more efficiently; it comes down to using data to improve treatment, make better choices, and simplify hospital operations for all patients.
Keywords: data mining, healthcare, big data, large amounts of data
Procedia PDF Downloads 78
207 Analyzing the Commentator Network Within the French YouTube Environment
Authors: Kurt Maxwell Kusterer, Sylvain Mignot, Annick Vignes
Abstract:
To the best of our knowledge, YouTube is the largest video hosting platform in the world. A high number of creators, viewers, subscribers and commentators act in this specific ecosystem, which generates huge sums of money. Views, subscribers, and comments help to increase the popularity of content creators. The most popular creators are sponsored by brands and participate in marketing campaigns; for a few of them, this becomes a financially rewarding profession. This is made possible through the YouTube Partner Program, which shares revenue among creators based on their popularity. We believe the role of comments in increasing popularity deserves emphasis. In what follows, YouTube is considered as a bilateral network between videos and commentators. Analyzing a detailed data set focused on French YouTubers, we consider each comment as a link between a commentator and a video. Our research question asks which features of a video give it the highest probability of being commented on, and, following on from this, how we can use these features to predict a commentator's choice to comment on one video instead of another, considering the characteristics of the commentators, videos, topics, channels, and recommendations. We expect to see that the videos of more popular channels generate higher viewer engagement and are thus more frequently commented. The interest lies in discovering features which have not classically been considered as markers for popularity on the platform. A quick view of our data set shows that 96% of the commentators comment only once on a given video. Thus, we study a non-weighted bipartite network between commentators and videos built on the sub-sample of 96% of unique comments. A link exists between two nodes when a commentator makes a comment on a video. We run an Exponential Random Graph Model (ERGM) approach to evaluate which characteristics influence the probability of commenting on a video. The creation of a link will be explained in terms of common video features, such as duration, quality, number of likes, number of views, etc. Our data cover the period 2020-2021 and focus on the French YouTube environment. From an initial set of 391 588 videos, we extract the channels which can be monetized according to YouTube regulations (channels with at least 1000 subscribers and more than 4000 hours of viewing time during the last twelve months). In the end, we have a data set of 128 462 videos across 4093 channels. Based on these videos, we have a data set of 1 032 771 unique commentators, with a mean of 2 comments per commentator, a minimum of 1 comment and a maximum of 584 comments.
Keywords: YouTube, social networks, economics, consumer behaviour
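Building the non-weighted bipartite network is straightforward; a sketch with networkx is shown below, assuming a comments table with hypothetical column names. ERGM estimation itself is commonly performed with R's statnet/ergm packages and is not reproduced here.

```python
# A minimal sketch of building the non-weighted bipartite commentator-video
# network from the unique comments, assuming a pandas DataFrame with
# hypothetical columns 'commentator_id' and 'video_id' whose id spaces do
# not collide; ERGM estimation is not shown.
import networkx as nx
import pandas as pd

def build_bipartite(comments: pd.DataFrame) -> nx.Graph:
    g = nx.Graph()
    g.add_nodes_from(comments["commentator_id"].unique(), bipartite="commentator")
    g.add_nodes_from(comments["video_id"].unique(), bipartite="video")
    # One link per (commentator, video) pair; duplicate pairs collapse automatically.
    g.add_edges_from(zip(comments["commentator_id"], comments["video_id"]))
    return g

# g = build_bipartite(df)
# video_degree = {n: d for n, d in g.degree() if g.nodes[n]["bipartite"] == "video"}
```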
Procedia PDF Downloads 69
206 Sensitivity Analysis of the Heat Exchanger Design in Net Power Oxy-Combustion Cycle for Carbon Capture
Authors: Hirbod Varasteh, Hamidreza Gohari Darabkhani
Abstract:
Global warming and its impact on climate change are among the main challenges of the current century. Global warming is mainly due to the emission of greenhouse gases (GHG), and carbon dioxide (CO2) is known to be the major contributor to the GHG emission profile. Whilst the energy sector is the primary source of CO2 emissions, Carbon Capture and Storage (CCS) is believed to be the solution for controlling them. Oxyfuel combustion (oxy-combustion) is one of the major technologies for capturing CO2 from power plants. For gas turbines, several oxy-combustion power cycles (oxyturbine cycles) have been investigated by means of thermodynamic analysis. The NetPower cycle is one of the leading oxyturbine power cycles, with almost full carbon capture capability from a natural gas fired power plant. In this manuscript, a sensitivity analysis of the heat exchanger design in the NetPower cycle is completed by means of process modelling. The heat capacity variation and supercritical CO2 with gaseous admixtures are considered in a multi-zone analysis with Aspen Plus software. It is found that the heat exchanger design has a major role in increasing the efficiency of the NetPower cycle. A pinch-point analysis is done to extract the composite and grand composite curves for the heat exchanger. The relationship between the cycle efficiency and the minimum approach temperature (∆Tmin) of the heat exchanger has also been evaluated: an increase in ∆Tmin causes a decrease in the temperature of the recycled flue gases (RFG) and an overall decrease in the required power for the recycled gas compressor. The main challenge in the design of heat exchangers in power plants is the trade-off between capital and operational costs. To achieve a lower ∆Tmin, a larger heat exchanger is required; this means a higher capital cost but leads to better heat recovery and a lower operational cost. To balance this, ∆Tmin is selected at the minimum point of the combined capital and operational cost curves. This study provides an insight into the NetPower oxy-combustion cycle's performance analysis and operational conditions based on its heat exchanger design.
Keywords: carbon capture and storage, oxy-combustion, NetPower cycle, oxyturbine cycles, zero emission, heat exchanger design, supercritical carbon dioxide, oxy-fuel power plant, pinch point analysis
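The ∆Tmin trade-off can be illustrated with a back-of-envelope calculation for a counter-current exchanger, shown below; it assumes constant heat-capacity flow rates and no internal pinch, and the stream data are hypothetical rather than NetPower cycle values.

```python
# A back-of-envelope sketch of the dTmin trade-off for a counter-current
# recuperator, assuming constant heat-capacity flow rates and no internal
# pinch; all stream data are hypothetical, not NetPower cycle values.
def max_recovered_duty(cp_hot, cp_cold, t_hot_in, t_cold_in, dt_min):
    """Maximum duty [kW] of a counter-current exchanger for a given dTmin [K]."""
    return min(cp_hot, cp_cold) * (t_hot_in - t_cold_in - dt_min)

for dt_min in (5.0, 10.0, 20.0, 40.0):
    q = max_recovered_duty(cp_hot=100.0, cp_cold=95.0,  # kW/K
                           t_hot_in=720.0, t_cold_in=60.0, dt_min=dt_min)
    print(f"dTmin = {dt_min:5.1f} K -> recoverable duty = {q:8.1f} kW")
# Smaller dTmin recovers more heat (lower operating cost) but needs more
# exchanger area (higher capital cost), hence the optimum at the combined-cost minimum.
```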
Procedia PDF Downloads 204
205 Conjugated Linoleic Acid Effect on Body Weight and Body Composition in Women: Systematic Review and Meta-Analysis
Authors: Hanady Hamdallah, H. Elyse Ireland, John H. H. Williams
Abstract:
Conjugated linoleic acid (CLA) is a food supplement that is reported to have multiple beneficial health effects, including being anti-carcinogenic, anti-inflammatory and anti-obesity. Animal studies have shown a significant anti-obesity effect of CLA, but results in humans have been inconsistent: some studies found an anti-obesity effect, while others failed to find any decline in obesity markers after CLA supplementation. This meta-analysis aimed to determine whether oral CLA supplementation reduces obesity-related markers in women. PubMed, the Cochrane Library, and Google Scholar were used to identify eligible trials using two main search strategies: the first was to search for eligible trials using the keywords 'Conjugated linoleic acid', 'CLA', and 'Women', and the second was to extract eligible trials from previously published systematic reviews and meta-analyses. Eligible trials were placebo-controlled trials in which women were supplemented with a CLA mixture in the form of oral capsules for 6 months or less. These trials also provided information about body composition expressed as body weight (BW), body mass index (BMI), total body fat (TBF), percentage body fat (BF%), and/or lean body mass (LBM). The quality of each included study was assessed using both the JADAD scale and an adapted CONSORT checklist. A meta-analysis of 8 eligible trials showed that CLA supplementation was significantly associated with reduced BW (mean ± SD, 1.2 ± 0.26 kg, p < 0.001), BMI (0.6 ± 0.13 kg/m², p < 0.001) and TBF (0.76 ± 0.26 kg, p = 0.003) in women when supplemented over 6-16 weeks. Subgroup meta-analysis demonstrated a significant reduction in BW (1.29 ± 0.31 kg, p < 0.001), BMI (0.60 ± 0.14 kg/m², p < 0.001) and TBF (0.82 ± 0.28 kg, p = 0.003) in the trials that recruited overweight-obese women. The second subgroup meta-analysis, which considered the menopausal status of the participants, found that CLA was significantly associated with reduced BW (1.35 ± 0.37 kg, p < 0.001; 1.05 ± 0.36 kg, p = 0.003) and BMI (0.50 ± 0.17 kg/m², p = 0.003; 0.75 ± 0.2 kg/m², p < 0.001) in both pre- and post-menopausal women, respectively. A reduction in TBF (1.09 ± 0.37 kg, p = 0.003) was only significant in post-menopausal women. Interestingly, CLA supplementation was associated with a significant reduction in BW (1.05 ± 0.35 kg, p = 0.003), BMI (0.73 ± 0.2 kg/m², p < 0.001) and TBF (1.07 ± 0.36 kg, p = 0.003) in the trials without lifestyle monitoring or interventions. No significant effect of CLA on LBM was detected in this meta-analysis. This meta-analysis suggests a moderate anti-obesity effect of CLA on BW, BMI and TBF reduction in women when supplemented over 6-16 weeks, particularly in overweight-obese women and post-menopausal women. However, this finding requires careful interpretation due to several issues in the designs of the available CLA supplementation trials. More well-designed trials are required to confirm these results.
Keywords: body composition, body mass index, body weight, conjugated linoleic acid
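For readers unfamiliar with the pooling step, the sketch below shows fixed-effect inverse-variance pooling of a mean difference, the basic computation behind such a meta-analysis; the trial effects and standard errors are hypothetical, not the data of this review.

```python
# A minimal sketch of inverse-variance pooling for a mean difference, as used
# in a fixed-effect meta-analysis; the effect sizes and standard errors below
# are hypothetical, not the trial data from this review.
import math

def pool_fixed_effect(effects, std_errors):
    """Return pooled effect and its standard error (inverse-variance weights)."""
    weights = [1.0 / se**2 for se in std_errors]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

effects = [-1.4, -0.9, -1.6, -1.1]  # per-trial body-weight change vs placebo, kg
ses = [0.5, 0.6, 0.7, 0.4]
d, se = pool_fixed_effect(effects, ses)
print(f"pooled = {d:.2f} kg, 95% CI = ({d - 1.96*se:.2f}, {d + 1.96*se:.2f})")
```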
Procedia PDF Downloads 295
204 Design and Test a Robust Bearing-Only Target Motion Analysis Algorithm Based on Modified Gain Extended Kalman Filter
Authors: Mohammad Tarek Al Muallim, Ozhan Duzenli, Ceyhun Ilguy
Abstract:
Passive sonar is a method for detecting acoustic signals in the ocean; it detects the acoustic signals emanating from external sources. With passive sonar, only the bearing of the target can be determined, with no information about its range. Target Motion Analysis (TMA) is a process to estimate the position and speed of a target using passive sonar information. Since bearing is the only available information, the technique is called bearing-only TMA. Many TMA techniques have been developed; however, there is still no highly effective method that can always track an unknown target and extract its trajectory. In this work, an effective bearing-only TMA algorithm is designed. The measured bearing angles are very noisy; moreover, for a multi-beam sonar, the measurements are quantized due to the sonar beam width. To deal with this, a modified gain extended Kalman filter algorithm is used. The algorithm is fine-tuned, and many modules are added to improve the performance. A special validation gate module is used to ensure the stability of the algorithm. Many indicators of performance and confidence level are designed and tested. A new method to detect whether the target is maneuvering is proposed. Moreover, a reactive optimal observer maneuver based on bearing measurements is proposed, which ensures convergence to the right solution every time. To test the performance of the proposed TMA algorithm, a simulation was carried out with a MATLAB program. The simulator models a discrete scenario for an observer and a target, taking into consideration all the practical aspects of the problem, such as smooth speed transitions, circular turns of the ship, noisy measurements, and quantized bearing measurements from a multi-beam sonar. Tests were run for many given scenarios. In all tests, full tracking was achieved within 10 minutes with very little error: the range estimation error was less than 5%, the speed error less than 5%, and the heading error less than 2 degrees. The online performance estimator was mostly aligned with the real performance, and the range estimation confidence level reached 90% when the range error was less than 10%. The experiments show that the proposed TMA algorithm is very robust and has a low estimation error; however, its convergence time still needs to be improved.
Keywords: target motion analysis, Kalman filter, passive sonar, bearing-only tracking
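The core of any bearing-only TMA filter is the EKF update with a single bearing measurement. The sketch below shows that update for a relative Cartesian state; the paper's modified-gain formulation, validation gating, and maneuver detection are not reproduced.

```python
# A minimal sketch of the EKF bearing-measurement update at the core of
# bearing-only TMA (the paper's modified-gain variant, validation gating
# and maneuver detection are not shown); state is relative [x, y, vx, vy].
import numpy as np

def bearing_update(x, P, z_meas, sigma_b):
    """One EKF update with a bearing measured clockwise from north [rad]."""
    dx, dy = x[0], x[1]
    r2 = dx**2 + dy**2
    z_pred = np.arctan2(dx, dy)
    H = np.array([[dy / r2, -dx / r2, 0.0, 0.0]])  # Jacobian of atan2(dx, dy)
    innov = (z_meas - z_pred + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi]
    S = H @ P @ H.T + sigma_b**2   # innovation variance (1x1)
    K = P @ H.T / S                # Kalman gain (4x1)
    x_new = x + (K * innov).ravel()
    P_new = (np.eye(4) - K @ H) @ P
    return x_new, P_new
```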
Procedia PDF Downloads 404
203 The Use of Optical-Radar Remotely-Sensed Data for Characterizing Geomorphic, Structural and Hydrologic Features and Modeling Groundwater Prospective Zones in Arid Zones
Authors: Mohamed Abdelkareem
Abstract:
Remote sensing data contribute to predicting prospective areas of water resources. Here, microwave and multispectral data were integrated along with climatic, hydrologic, and geological data. In this article, Sentinel-2, Landsat-8 Operational Land Imager (OLI), Shuttle Radar Topography Mission (SRTM), Tropical Rainfall Measuring Mission (TRMM), and Advanced Land Observing Satellite (ALOS) Phased Array Type L-band Synthetic Aperture Radar (PALSAR) data were utilized to identify the geological, hydrologic and structural features of Wadi Asyuti, a defunct tributary of the Nile basin in the eastern Sahara. Image transformation of the Sentinel-2 and Landsat-8 data allowed the different varieties of rock units to be characterized. Integration of microwave remotely-sensed data and GIS techniques provided information on the physical characteristics of catchments and rainfall zones, which play a crucial role in mapping groundwater prospective zones. Fused Landsat-8 OLI and ALOS/PALSAR data enhanced structural elements that are difficult to reveal using optical data alone. Lineament extraction and interpretation indicated that the area is clearly shaped by a NE-SW graben that is cut by a NW-SE trend. Such structures allowed the accumulation of thick sediments in the downstream area. Processing of recent OLI data acquired on March 15, 2014, verified the flood potential maps, offered the opportunity to extract the extent of the flooding zone of the recent flash flood event (March 9, 2014), and revealed infiltration characteristics. Several layers, including geology, slope, topography, drainage density, lineament density, soil characteristics, rainfall, and morphometric characteristics, were combined after assigning a weight to each using a GIS-based knowledge-driven approach. The results revealed that the predicted groundwater potential zones (GPZs) can be arranged into six distinctive groups depending on their probability for groundwater, namely very low, low, moderate, high, very high, and excellent. Field and well data validated the delineated zones.
Keywords: GIS, remote sensing, groundwater, Egypt
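The knowledge-driven overlay amounts to a weighted linear combination of normalised raster layers, as sketched below; the weights and class breaks are illustrative placeholders, not the values assigned in the study.

```python
# A minimal sketch of the GIS-based knowledge-driven overlay, assuming each
# thematic layer has been rasterised, co-registered, and rescaled to 0-1;
# the weights below are illustrative placeholders, not the study's values.
import numpy as np

def groundwater_potential(layers: dict, weights: dict) -> np.ndarray:
    """Weighted linear combination of normalised evidence layers."""
    total_w = sum(weights.values())
    return sum(weights[name] * layers[name] for name in weights) / total_w

# layers = {"rainfall": r, "lineament_density": ld, "drainage_density": dd,
#           "slope": s, "geology": g, "soil": so}
# weights = {"rainfall": 0.25, "lineament_density": 0.20, "drainage_density": 0.15,
#            "slope": 0.15, "geology": 0.15, "soil": 0.10}
# Slice the 0-1 index into the six classes (very low .. excellent):
# classes = np.digitize(groundwater_potential(layers, weights),
#                       bins=[0.17, 0.33, 0.5, 0.67, 0.83])
```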
Procedia PDF Downloads 98
202 Satellite Interferometric Investigations of Subsidence Events Associated with Groundwater Extraction in Sao Paulo, Brazil
Authors: B. Mendonça, D. Sandwell
Abstract:
The Metropolitan Region of Sao Paulo (MRSP) has suffered from serious water scarcity. Consequently, the most convenient solution has been building wells to extract groundwater from local aquifers. However, this requires constant vigilance to prevent over-extraction and future events, such as subsidence, that can pose a serious threat to the population. Radar imaging techniques (InSAR) have allowed continuous investigation of such phenomena. The data analyzed in the present study consist of 23 SAR images from October 2007 to March 2011, obtained by the ALOS-1 spacecraft. Data processing was done with the GMTSAR software, using the InSAR technique to create pairs of interferograms capturing ground displacement over different time spans. First results show a correlation between the locations of 102 wells registered in 2009 and ground displacement signals of −90 millimeters (mm) or lower in the region. The longest time-span interferogram obtained covers October 2007 to March 2010. From that interferogram, it was possible to detect the average displacement velocity in millimeters per year (mm/y) and the areas where strong signals have persisted in the MRSP. Four specific areas with subsidence signals of 28 mm/y to 40 mm/y were chosen to investigate the phenomenon: Guarulhos (Sao Paulo International Airport), Greater Sao Paulo, Itaquera and Sao Caetano do Sul. The signals extended between 0.6 km and 1.65 km in length. All areas are located above a sedimentary type of aquifer. Itaquera and Sao Caetano do Sul showed signals varying from 28 mm/y to 32 mm/y. On the other hand, the places most likely to be suffering from stronger subsidence are Greater Sao Paulo and Guarulhos, right beside the International Airport of Sao Paulo, where the observed displacement rate goes from 35 mm/y to 40 mm/y. Previous investigations of water use at the International Airport highlight the risks of the excessive water extraction that was being done through 9 deep wells. Therefore, subsidence events are likely to occur and to cause serious damage in the area. This study reveals a situation that has not been explored with proper importance in the city, given its social and economic consequences. Since the data were only available until 2011, the question that remains is whether the situation still persists. The results, however, reaffirm a scenario of risk at the International Airport of Sao Paulo that needs further investigation.
Keywords: ground subsidence, Interferometric Synthetic Aperture Radar (InSAR), metropolitan region of Sao Paulo, water extraction
Procedia PDF Downloads 355
201 Clustering Ethno-Informatics of Naming Village in Java Island Using Data Mining
Authors: Atje Setiawan Abdullah, Budi Nurani Ruchjana, I. Gede Nyoman Mindra Jaya, Eddy Hermawan
Abstract:
Ethnoscience views culture from a scientific perspective, which may help in understanding how people develop various forms of knowledge and belief, initially focusing on ecology and history. One of the areas studied in ethnoscience is ethno-informatics, the application of informatics to culture. In this study, the informatics technique used is data mining, a process to automatically extract knowledge from large databases in the form of interesting patterns. The cultural application is described by a database of village names on the island of Java obtained from the Geospatial Information Agency of Indonesia (BIG), 2014. The purpose of this study is threefold: first, to classify village names on the island of Java based on word structure, including the word prefix, the syllables contained, and the complete word; second, to classify the meanings of village names into specific categories, as well as their role in community behavioral characteristics; third, to visualize village names at map locations to see the similarity of village naming in each province. In this research, we developed two theorems: an area theorem, which collects the intersections of village names in each province on the island of Java, and a wedge-composition theorem on the sets of provinces in Java, which is used to view the peculiarities of a study location. The methodology is based on the Knowledge Discovery in Databases (KDD) method in data mining; the process includes preprocessing, data mining, and post-processing. The results show that the Javanese community prioritizes merit in life, always works hard to achieve a more prosperous life, and values water and environmental sustainability. Village naming in adjacent provinces shows a high degree of similarity, and provinces influence each other. Cultural similarity among Central Java, East Java and West Java-Banten is high, whereas between Jakarta and Yogyakarta it is low. This research characterizes the cultural identity of communities through the meaning of village names on the island of Java; this characterization is expected to serve as a guide to the daily behavior of people on the island of Java.
Keywords: ethnoscience, ethno-informatics, data mining, clustering, Java island culture
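One plausible way to cluster village names by word structure is sketched below, using character n-grams (which capture shared prefixes and syllables) and k-means; the feature choice and number of clusters are illustrative assumptions, not the study's exact KDD configuration.

```python
# A minimal sketch of clustering village names by word structure (prefix and
# character patterns), assuming a plain list of names; the feature choice and
# k are illustrative, not the study's exact configuration.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def cluster_village_names(names: list[str], n_clusters: int = 8):
    # Character n-grams capture shared prefixes/syllables such as "Sido-" or "Karang-".
    vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
    X = vec.fit_transform(names)
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)

# names = ["Sidomulyo", "Sidorejo", "Karanganyar", "Karangrejo"]
# labels = cluster_village_names(names, n_clusters=2)
```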
Procedia PDF Downloads 283
200 An EEG-Based Scale for Comatose Patients' Vigilance State
Authors: Bechir Hbibi, Lamine Mili
Abstract:
Understanding the condition of comatose patients can be difficult, but it is crucial to their optimal treatment. Consequently, numerous scoring systems have been developed around the world to categorize patient states based on physiological assessments. Although validated and widely adopted by medical communities, these scores still present numerous limitations and obstacles. Even with the addition of further tests and extensions, these scoring systems have not been able to overcome certain limitations, and it appears unlikely that they will be able to do so in the future. On the other hand, physiological tests are not the only way to gain insight into comatose patients. EEG signal analysis has contributed extensively to the understanding of the human brain and human consciousness and has been used by researchers in the classification of different levels of disease. The use of EEG in the ICU has become an urgent matter in several cases and has been recommended by medical organizations. In this field, the EEG is used to investigate epilepsy, dementia, brain injuries, and many other neurological disorders. It has recently also been used to detect pain activity in some regions of the brain, to measure stress levels, and to evaluate sleep quality. In our recent work, our aim was to use multifractal analysis, a very successful method for handling multifractal signals and feature extraction, to establish a state-of-awareness scale for comatose patients based on their electrical brain activity. The results show that this score could be instantaneous and could overcome many of the limitations of the physiological scales. Multifractal analysis stands out as a highly effective tool for characterizing non-stationary and self-similar signals; it demonstrates strong performance in extracting the properties of fractal and multifractal data, including signals and images. As such, we leverage this method, along with other features derived from EEG recordings of comatose patients, to develop a scale. This scale aims to accurately depict the vigilance state of patients in intensive care units and to address many of the limitations inherent in physiological scales such as the Glasgow Coma Scale (GCS) and the FOUR score. Applying version V0 of this approach to 30 patients with known GCS showed that the EEG-based score describes the states of vigilance similarly, but also distinguishes between the states of 8 sedated patients for whom the GCS could not be applied. Therefore, our approach could show promising results for patients with disabilities, patients given painkillers, and other categories where physiological scores cannot be applied.
Keywords: coma, vigilance state, EEG, multifractal analysis, feature extraction
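The multifractal feature extraction can be illustrated with the q-order fluctuation analysis (MFDFA) at its core, sketched below for a single EEG channel; the window sizes, q values, and detrending order are illustrative choices, not the study's settings.

```python
# A minimal sketch of the q-order fluctuation analysis (MFDFA) underlying the
# proposed scale, for a single EEG channel; scales, q values, and the
# detrending order are illustrative choices.
import numpy as np

def mfdfa_hq(x, scales, q_values, poly_order=1):
    """Generalised Hurst exponent h(q) of a 1-D signal."""
    profile = np.cumsum(x - np.mean(x))
    hq = []
    for q in q_values:
        log_f = []
        for s in scales:
            n_seg = len(profile) // s
            segs = profile[:n_seg * s].reshape(n_seg, s)
            t = np.arange(s)
            # Local detrending: variance of residuals around a polynomial fit.
            f2 = np.asarray(
                [np.mean((seg - np.polyval(np.polyfit(t, seg, poly_order), t))**2)
                 for seg in segs])
            fq = np.exp(0.5 * np.mean(np.log(f2))) if q == 0 \
                 else np.mean(f2 ** (q / 2.0)) ** (1.0 / q)
            log_f.append(np.log(fq))
        hq.append(np.polyfit(np.log(scales), log_f, 1)[0])  # slope = h(q)
    return np.asarray(hq)

# hq = mfdfa_hq(eeg_channel, scales=[16, 32, 64, 128, 256], q_values=[-3, -1, 0, 1, 3])
```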
Procedia PDF Downloads 76
199 Assessing Building Rooftop Potential for Solar Photovoltaic Energy and Rainwater Harvesting: A Sustainable Urban Plan for Atlantis, Western Cape
Authors: Adedayo Adeleke, Dineo Pule
Abstract:
The ongoing load-shedding in most parts of South Africa, combined with climate change causing severe drought conditions in Cape Town, has left consumers seeking alternative sources of power and water. Solar energy, which is abundant in most parts of South Africa and regarded as a clean and renewable source, allows electricity to be generated via solar photovoltaic systems. Rainwater harvesting is the collection and storage of rainwater from building rooftops, providing a supply for people without reliable access to water. The lack of dependable energy and water sources must be addressed by shifting to solar energy via solar photovoltaic systems and to rainwater harvesting. Before this can be done, the potential of building rooftops must be assessed to determine whether solar energy and rainwater harvesting can meet, or significantly contribute to, the electricity and water demands of the Atlantis industrial areas. This research project presents methods and approaches for automatically extracting building rooftops in the Atlantis industrial areas and evaluating their potential for solar photovoltaic and rainwater harvesting systems using Light Detection and Ranging (LiDAR) data and aerial imagery. The four objectives were to: (1) identify an optimal method of extracting building rooftops from aerial imagery and LiDAR data; (2) identify a suitable solar radiation model that can provide a global solar radiation estimate for the study area; (3) estimate the solar photovoltaic potential over the building rooftops; and (4) estimate the amount of rainwater that can be harvested from the building rooftops in the study area. Mapflow, a plugin for the Quantum Geographic Information System (QGIS), was used to automatically extract building rooftops from aerial imagery. The mean annual rainfall in Cape Town was obtained from the 1991-2020 rainfall record and used to calculate the amount of rainwater that can be harvested from building rooftops. The potential for rainwater harvesting and solar photovoltaic systems was assessed, and it can be concluded that these systems have potential, but only to supplement the existing resource supply and offer relief in times of drought and load-shedding.
Keywords: roof potential, rainwater harvesting, urban plan, roof extraction
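The two rooftop estimates reduce to simple area-based formulas, as in the back-of-the-envelope Python sketch below. The irradiance, rainfall, runoff coefficient, and panel parameters are generic placeholder values, not figures from the study.

```python
def pv_yield_kwh(roof_area_m2, irradiance_kwh_m2_yr=1900.0,
                 usable_fraction=0.7, panel_eff=0.18, perf_ratio=0.8):
    """Annual PV yield from a rooftop (all parameter values assumed)."""
    return (roof_area_m2 * usable_fraction * irradiance_kwh_m2_yr
            * panel_eff * perf_ratio)

def rainwater_litres(roof_area_m2, annual_rainfall_mm=500.0, runoff_coeff=0.85):
    """Annual harvestable rainwater: 1 mm over 1 m2 equals 1 litre."""
    return roof_area_m2 * annual_rainfall_mm * runoff_coeff

roof = 1200.0  # m2, an illustrative industrial rooftop
print(f"PV: {pv_yield_kwh(roof):,.0f} kWh/yr, "
      f"rain: {rainwater_litres(roof):,.0f} L/yr")
```

In practice the roof area comes from the extracted footprints and the irradiance from the chosen solar radiation model, so each extracted rooftop gets its own pair of estimates.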
Procedia PDF Downloads 116
198 Pomegranate Attenuates Cognitive and Behavioural Deficits and Reduces Inflammation in a Transgenic Mouse Model of Alzheimer's Disease
Authors: M. M. Essa, S. Subash, M. Akbar, S. Al-Adawi, A. Al-Asmi, G. J. Guillemein
Abstract:
Objective: Transgenic (tg) mice carrying an amyloid precursor protein (APP) gene mutation develop extracellular amyloid beta (Aβ) deposition in the brain and severe memory and behavioural deficits with age. These mice serve as an important animal model for testing the efficacy of novel drug candidates for the treatment and management of symptoms of Alzheimer's disease (AD). Several reports have suggested that oxidative stress is the underlying cause of Aβ neurotoxicity in AD. Pomegranates contain very high levels of antioxidants and have several medicinal properties that may be useful for improving the quality of life of AD patients. In this study, we investigated the effect of dietary supplementation with Omani pomegranate extract on memory, anxiety, and learning skills, as well as on inflammation, in an AD mouse model carrying the double Swedish APP mutation (APPsw/Tg2576). Methods: The experimental groups of APP-transgenic mice were fed custom-mix diets (pellets) containing 4% pomegranate from the age of 4 months. We assessed spatial memory and learning ability, psychomotor coordination, and anxiety-related behaviour in Tg and wild-type mice at the ages of 4-5 months and 18-19 months using the Morris water maze test, rotarod test, elevated plus maze test, and open field test. Inflammatory parameters were also analysed. Results: APPsw/Tg2576 mice fed a standard chow diet without pomegranate showed significant memory deficits, increased anxiety-related behaviour, and severe impairment of spatial learning ability, position discrimination learning ability, and motor coordination, along with increased inflammation, compared to wild-type mice on the same diet at the age of 18-19 months. In contrast, APPsw/Tg2576 mice fed a diet containing 4% pomegranate showed significant improvements in memory, learning, locomotor function, and anxiety, with reduced inflammatory markers, compared to APPsw/Tg2576 mice fed the standard chow diet. Conclusion: Our results suggest that dietary supplementation with pomegranate may slow the progression of cognitive and behavioural impairments in AD. The exact mechanism is still unclear, and further extensive research is needed.
Keywords: Alzheimer's disease, pomegranates, Oman, cognitive decline, memory loss, anxiety, inflammation
Procedia PDF Downloads 529
197 Tourism and Sport: The Acknowledgment of a Strong Relationship for the Environment Framed in a Literature Review
Authors: Rute Martins, Margarida Mascarenhas, Elsa Pereira
Abstract:
The relationship between sport and the natural environment was investigated through a systematic literature review analysing the available scientific articles on the association between sport (including physical activity, active leisure, and recreation) and environmental behaviour. The data covered the last five years (2013 to April 2018) and were gathered from the Scopus, Web of Science, ScienceDirect, Sage, Green Leaf Online Library, GreenFile (EBSCO), and Wiley Online Library databases. The qualitative content analysis employed in this study was performed with NVivo software. Considering only the included scientific articles, more than half of the collected papers highlighted tourism as the main area in which sport is being researched with regard to environmental themes, making it possible to extract a perspective on the orientation of ecological concerns in the sports tourism industry. In winter sports, climate change is already an identified issue, raising questions about the impact of the environment on sports practice; here the focus is on possible adaptive strategies, researching the characteristics of the sports tourist and of the winter sports industry. Regarding natural parks and protected areas (such as reefs), most of the research addresses the environmental impact of sports tourism, with the conservation and protection of nature as the core topics. The profile of the sports tourist is addressed by many articles, in which motives for practice and environmental values are examined, and connections are made to the concepts of recreation specialisation, environmental responsibility, environmental education, and place attachment. Regarding sustainable management, the sports tourism literature is approaching the research more holistically, exploring stakeholder interconnections and focusing on landscape planning and the environmentally sustainable practices of sport tourism organisations. Natural parks, protected areas, coral reefs, and snow areas serve as the preferred case studies for investigating environmental impact and ecotourism, studied in the great majority through hiking and diving. The results of the study are a valuable resource for understanding the importance of sports tourism in environmental and sustainable action, along with the need to embrace all stakeholders in the relationship between sport and the natural environment.
Keywords: ecotourism, environmental behaviour, outdoor recreation, sport tourism
Procedia PDF Downloads 164
196 Integrating Virtual Reality and Building Information Model-Based Quantity Takeoffs for Supporting Construction Management
Authors: Chin-Yu Lin, Kun-Chi Wang, Shih-Hsu Wang, Wei-Chih Wang
Abstract:
A construction superintendent needs to know not only the quantities of cost items or materials completed each day, in order to develop a daily report or calculate daily progress (earned value), but also the quantities of materials (e.g., reinforcing steel and concrete) to be ordered or moved onto the jobsite for the in-progress or ready-to-start construction activities (e.g., erection of reinforcing steel and concrete pouring). These daily construction management tasks require great effort to extract accurate quantities in a short time (usually right before getting off work every day). As a result, most superintendents can only provide these quantity data based either on what they see on the site (high inaccuracy) or on the extraction of quantities from two-dimensional (2D) construction drawings (high time consumption). Hence, the current practice of reporting the quantities completed each day needs improvement in both accuracy and efficiency. Recently, three-dimensional (3D) building information model (BIM) techniques have been widely applied to support the construction quantity takeoff (QTO) process, and virtual reality (VR) makes it possible to view a building from a first-person viewpoint. This study therefore proposes an innovative system integrating VR (using 'Unity') and BIM (using 'Revit') to extract quantities in support of the daily management tasks above. VR allows a system user to be present in a virtual building and thus to assess construction progress more objectively from the office. The VR- and BIM-based system is supported by an integrated database holding the information and data associated with the BIM model, the QTO, and costs. Each day, a superintendent can move through the BIM-based virtual building to quickly identify, via a developed VR shooting function, the building components (or objects) that are in progress or finished on the jobsite, and then specify a percentage of completion (e.g., 20%, 50%, or 100%) for each identified object based on site observation. The system then generates the quantities completed that day by multiplying the specified percentage by the full quantities of the cost items (or materials) associated with the identified object. A building construction project located in northern Taiwan is used as a case study to test the benefits (i.e., accuracy and efficiency) of the proposed system in extracting quantities to support the development of daily reports and the ordering of construction materials.
Keywords: building information model, construction management, quantity takeoffs, virtual reality
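The core daily-report computation the system automates, multiplying a superintendent's completion percentage by the full QTO quantities tied to each identified object, reduces to a simple aggregation. The sketch below assumes a toy object/quantity schema; the actual Unity/Revit integration and database layout are not shown.

```python
from dataclasses import dataclass

@dataclass
class BimObject:
    object_id: str
    quantities: dict  # cost item -> full quantity from the BIM-based QTO

def daily_completed_quantities(identified, percent_complete):
    """Aggregate today's quantities: completion percentage x full quantity."""
    totals = {}
    for obj in identified:
        pct = percent_complete[obj.object_id]
        for item, qty in obj.quantities.items():
            totals[item] = totals.get(item, 0.0) + pct * qty
    return totals

# Illustrative objects and percentages; names and values are assumed.
column = BimObject("C-101", {"concrete_m3": 3.2, "rebar_kg": 410.0})
slab = BimObject("S-204", {"concrete_m3": 18.5, "rebar_kg": 1650.0})
report = daily_completed_quantities([column, slab],
                                    {"C-101": 1.0, "S-204": 0.5})
print(report)  # {'concrete_m3': 12.45, 'rebar_kg': 1235.0}
```

The same lookup run over the not-yet-started objects would give the material quantities still to be ordered, the second daily task the abstract describes.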
Procedia PDF Downloads 132
195 Fracture Toughness Characterizations of Single Edge Notch (SENB) Testing Using DIC System
Authors: Amr Mohamadien, Ali Imanpour, Sylvester Agbo, Nader Yoosef-Ghodsi, Samer Adeeb
Abstract:
The fracture toughness resistance curve (e.g., the J-R curve and the crack tip opening displacement (CTOD) or δ-R curve) is important in facilitating strain-based design and integrity assessment of oil and gas pipelines. This paper presents laboratory experimental data characterising the fracture behaviour of pipeline steel. The parameters influencing the fracture of API 5L X52 pipeline steel, including different initial crack sizes, were investigated experimentally for single edge notch bend (SENB) specimens. A total of 9 small-scale specimens with different crack-length-to-specimen-depth ratios were prepared and tested in single edge notch bending. ASTM E1820 and BS 7448 provide testing procedures for constructing the fracture resistance curve (load-CTOD, CTOD-R, or J-R) from test results; however, these procedures are limited by standard specimen dimensions, displacement gauges, and calibration curves. To overcome these limitations, this paper presents the use of small-scale specimens and a 3D digital image correlation (DIC) system to extract the parameters required for estimating fracture toughness. Fracture resistance curve parameters in terms of crack mouth opening displacement (CMOD), crack tip opening displacement (CTOD), and crack growth length (∆a) were obtained from the test results using the DIC system, and an improved regression-fitted resistance function (CTOD vs. crack growth, or J-integral vs. crack growth) accounting for a variety of initial crack sizes was constructed and is presented. The results were compared to the available results from classical physical measurement techniques, and acceptable agreement was observed. Moreover, a case study was implemented to estimate the maximum strain value that initiates stable crack growth, which may be of interest for developing more accurate strain-based damage models. The laboratory results of this study offer a valuable database for developing and validating damage models able to predict crack propagation in pipeline steel, accounting for the parameters that influence fracture toughness.
Keywords: fracture toughness, crack propagation in pipeline steels, CTOD-R, strain-based damage model
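For the resistance-curve fit, ASTM E1820-style procedures commonly use a power law of the form J = C(Δa)^m fitted in log-log space; a minimal sketch is shown below. The (Δa, J) pairs are invented for illustration, not measurements from this study.

```python
import numpy as np

# Illustrative (da, J) pairs of the kind DIC-tracked crack growth yields.
da = np.array([0.10, 0.25, 0.50, 0.90, 1.40])      # crack extension, mm
J = np.array([180.0, 260.0, 350.0, 440.0, 530.0])  # J-integral, kJ/m^2

# Power-law resistance curve J = C * da**m, linear regression in log space.
m, logC = np.polyfit(np.log(da), np.log(J), 1)
C = np.exp(logC)
print(f"J-R fit: J = {C:.1f} * da^{m:.2f}")
```

An "improved" fit of the kind the abstract mentions would add the initial crack size as a covariate, so that one function covers the whole family of specimens rather than one curve per crack ratio.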
Procedia PDF Downloads 63
194 A Study for Area-level Mosquito Abundance Prediction by Using Supervised Machine Learning Point-level Predictor
Authors: Theoktisti Makridou, Konstantinos Tsaprailis, George Arvanitakis, Charalampos Kontoes
Abstract:
In the literature, data-driven approaches to mosquito abundance prediction rely on supervised machine learning models trained on historical in-situ measurements. The drawback of this approach is that once the model is trained on point-level measurements (specific x, y coordinates), its predictions again refer to the point level. These point-level predictions reduce the applicability of such solutions, since many early-warning and mitigation applications need predictions at an area level, such as a municipality or village. In this study, we apply a data-driven predictive model that relies on public, open satellite Earth Observation and geospatial data and is trained on historical point-level in-situ measurements of mosquito abundance. We then propose a methodology for extending a point-level predictive model to a broader area-level prediction. Our methodology relies on randomly sampling the area of interest spatially (similarly to a Poisson hard-core process), obtaining the EO and geomorphological information for each sample, making a point-wise prediction for each sample, and aggregating the predictions to represent the average mosquito abundance of the area. We quantify the performance of the transformation from point-level to area-level predictions and analyse it to understand which parameters affect it positively or negatively. The goal of this study is to propose a methodology that predicts the mosquito abundance of a given area from point-level predictions and to provide qualitative insights into the expected performance of the area-level prediction. We applied our methodology to historical data (for Culex pipiens) from two areas of interest (the Veneto region of Italy and Central Macedonia in Greece), and in both cases the results were consistent. The mean mosquito abundance of a given area can be estimated with accuracy similar to that of the point-level predictor, sometimes even better. The density of the samples used to represent an area has a positive effect on performance, in contrast to the raw number of sampling points, which is uninformative about performance unless the size of the area is considered. Additionally, we observed that the distance between the sampling points and the actual in-situ measurements used for training did not strongly affect performance.
Keywords: mosquito abundance, supervised machine learning, Culex pipiens, spatial sampling, West Nile virus, earth observation data
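A stripped-down version of the point-to-area aggregation might look like the sketch below: sample random locations with a minimum-separation rejection rule (a crude stand-in for a Poisson hard-core process), evaluate the point-level model at each, and average. The toy abundance surface and all parameter values are assumptions, not the study's trained model.

```python
import numpy as np

def predict_point(x, y):
    """Stand-in for the trained point-level model (assumed interface)."""
    return 10.0 + 0.5 * np.sin(x) + 0.3 * np.cos(y)  # toy abundance surface

def area_abundance(bounds, n_samples=200, min_dist=0.02, seed=0):
    """Average point predictions over hard-core-style random samples."""
    xmin, xmax, ymin, ymax = bounds
    rng = np.random.default_rng(seed)
    pts = []
    while len(pts) < n_samples:
        p = rng.uniform([xmin, ymin], [xmax, ymax])
        # Reject candidates closer than min_dist to an accepted sample.
        if all(np.hypot(*(p - q)) >= min_dist for q in pts):
            pts.append(p)
    return float(np.mean([predict_point(x, y) for x, y in pts]))

print(f"Area-level abundance estimate: {area_abundance((0, 1, 0, 1)):.2f}")
```

In the real pipeline each sampled point would first be enriched with its EO and geomorphological covariates before prediction; the averaging step is unchanged.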
Procedia PDF Downloads 149
193 Analyzing Growth Trends of the Built Area in the Precincts of Various Types of Tourist Attractions in India: 2D and 3D Analysis
Authors: Yarra Sulina, Nunna Tagore Sai Priya, Ankhi Banerjee
Abstract:
With the rapid growth in tourist arrivals, there has been huge demand for the growth of infrastructure in destinations. With tourists increasingly preferring to stay near attractions, land use around tourist sites has changed considerably. However, given the regulations and guidelines issued by the authorities according to the nature of the tourism activity and geographical constraints, the pattern of built-form growth differs between tourist sites. This study therefore explores the patterns of built-up growth over the decade from 2009 to 2019 through two-dimensional and three-dimensional analysis. Land use maps are created through supervised classification of satellite images obtained from LANDSAT 4-5 and LANDSAT 8 for 2009 and 2019, respectively. The overall expansion of the built-up area in each region is analysed in relation to distance from the city's geographical centre, and the tourism-related growth regions influenced by proximity to tourist attractions are identified. Primary tourist sites in destinations with different geographical characteristics and tourism activities, which have undergone a significant increase in built-up area and are occupied by tourism-related infrastructure, are selected for further study. Proximity analysis of the tourism-related growth sites is carried out to delineate the influence zone of the tourist site within a destination. Further, a temporal analysis of the volumetric growth of built form is carried out to understand the morphology of the tourist precincts over time. A Digital Surface Model (DSM) and Digital Terrain Model (DTM) are used to extract building footprints along with building heights. Factors such as building height and building density are evaluated to understand the patterns of three-dimensional growth of the built area in each region. The study also explores the underlying reasons for such changes in built form around the various tourist sites and predicts the impact of such growth patterns on the region. Building height and building density around a tourist site strongly affect the appeal of the destination: surroundings incompatible with the theme of the tourist site detract from its attractiveness and lead to negative feedback from tourists, which is not a sustainable form of development. Proper spatial measures, in terms of both the area and the volume of the built environment, are therefore necessary for a healthy and sustainable environment around the tourist sites of a destination.
Keywords: sustainable tourism, growth patterns, land-use changes, 3-dimensional analysis of built-up area
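The DSM/DTM step reduces to a raster subtraction: a normalised DSM (nDSM = DSM - DTM) gives above-ground heights, from which footprints, built density, and mean building height follow. The 5x5 toy rasters and the height threshold below are invented for illustration; real grids would come from the LiDAR-derived models.

```python
import numpy as np

# Toy 5x5 elevation rasters in metres (assumed 1 m cell resolution).
dtm = np.full((5, 5), 4.0)      # bare-earth terrain elevation
dsm = dtm.copy()                # surface elevation (terrain + structures)
dsm[1:3, 1:4] += 9.0            # a 2x3-cell building rising 9 m above ground

ndsm = dsm - dtm                # normalised DSM: height above ground
footprint = ndsm > 2.5          # cells taller than 2.5 m treated as built
cell_area = 1.0                 # m2 per cell

built_area = footprint.sum() * cell_area
density = built_area / (ndsm.size * cell_area)   # built fraction of window
mean_height = ndsm[footprint].mean()
print(f"Built density: {density:.0%}, mean building height: {mean_height:.1f} m")
```

Running the same computation on the 2009 and 2019 rasters for each tourist precinct is what yields the temporal volumetric growth the abstract describes.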
Procedia PDF Downloads 79
192 A Comparative Study of the Physicochemical and Structural Properties of Quinoa Protein Isolate and Yellow Squat Shrimp Byproduct Protein Isolate through pH-Shifting Modification
Authors: María José Bugueño, Natalia Jaime, Cristian Castro, Diego Naranjo, Guido Trautmann, Mario Pérez-Won, Vilbett Briones-Labarca
Abstract:
Proteins play a crucial role in various prepared foods, including dairy products, drinks, emulsions, and ready meals, and food proteins are naturally present in food waste and byproducts. The alkaline extraction and acid precipitation method is commonly used to extract proteins from plant and animal sources owing to its product stability, cost-effectiveness, and ease of use. This study investigated the impact of pH-shifting at two different pH levels on the conformational changes affecting the physicochemical and functional properties of quinoa protein isolate (QPI) and yellow shrimp byproduct protein isolate (YSPI). The QPI and YSPI were extracted using the alkaline extraction-isoelectric precipitation method. The dispersions were adjusted to pH 4 or 12, stirred for 2 hours at 20°C to achieve a uniform dispersion, and then freeze-dried. Various analyses were conducted, including flexibility (F), free sulfhydryl content (Ho), emulsifying activity (EA), emulsifying capacity (EC), water holding capacity (WHC), oil holding capacity (OHC), intrinsic fluorescence, ultraviolet spectroscopy, differential scanning calorimetry (DSC), and Fourier transform infrared spectroscopy (FTIR), to assess the properties of the protein isolates. pH-shifting at pH 11 and 12 for QPI and YSPI, respectively, significantly improved the protein properties, whereas the property modification of the samples treated under acidic conditions was less pronounced. The pH 11 and 12 treatments significantly improved F, Ho, EA, WHC, OHC, intrinsic fluorescence, ultraviolet absorption, DSC, and FTIR characteristics. The increase in Ho was due to disulfide bond disruption, which produced more protein sub-units than the other treatments for both proteins. This study provides theoretical support for comprehensively elucidating the functional properties of protein isolates, promoting the application of plant proteins and marine byproducts. The pH-shifting process effectively improves the emulsifying properties and stability of QPI and YSPI, which can be considered potential plant-based or marine-byproduct-based emulsifiers for use in the food industry.
Keywords: quinoa protein, yellow shrimp by-product protein, physicochemical properties, structural properties
Procedia PDF Downloads 49
191 Integration of the Electro-Activation Technology for Soy Meal Valorization
Authors: Natela Gerliani, Mohammed Aider
Abstract:
Nowadays, interest in using sustainable technologies for protein extraction from underutilised oilseeds is growing. A major disposal problem for the oil industry is currently the byproducts of plant food processing, such as soybean meal; valorising soybean meal is therefore important for the oil industry, since it contains high-quality proteins and other valuable components. Soybean meal is generally used in livestock and poultry feed but is rarely used in human food, although its chemical composition can compensate for nutritional deficiencies and be used to balance protein in the human diet. For efficient valorisation of soybean meal, extraction is the key process for obtaining a protein-enriched ingredient that can be incorporated into the food matrix. However, most food components, such as the proteins extracted from oilseed byproducts, are obtained using organic and inorganic chemicals (e.g., acids, bases, TCA-acetone) with a significant environmental impact. In the context of sustainable production, electro-activation technology appears to be a good alternative: it requires only water, food-grade salt, and electricity as the main inputs, and it avoids the need for special equipment, worker safety training, and the transport and storage of hazardous materials. Electro-activation is a technology based on applied electrochemistry for generating acidic and alkaline solutions through the oxidation-reduction reactions that occur in the vicinity of the electrode/solution interfaces. It is an eco-friendly process that can replace conventional acidic and alkaline extraction. In this research, electro-activation for protein extraction from soybean meal was carried out in an electro-activation reactor consisting of three compartments separated by cation and anion exchange membranes, which allow non-contacting acidic and basic solutions to be created. Different current intensities (150 mA, 300 mA, and 450 mA) and treatment durations (10 min, 30 min, and 50 min) were tested. The results showed that the electro-activation extracts are of good quality compared with conventional extracts: the extractability obtained with the electro-activation method was 55%, whereas with the conventional method it was only 36%, and a maximum protein content of 48% in the extract was obtained with electro-activation compared to a maximum of 41% by conventional extraction. Hence, this environmentally sustainable electro-activation technology seems to be a promising type of protein extraction that can replace conventional extraction technology.
Keywords: by-products, eco-friendly technology, electro-activation, soybean meal
Procedia PDF Downloads 229