Search results for: evaluated statistically

902 Treating Voxels as Words: Word-to-Vector Methods for fMRI Meta-Analyses

Authors: Matthew Baucum

Abstract:

With the increasing popularity of fMRI as an experimental method, psychology and neuroscience can greatly benefit from advanced techniques for summarizing and synthesizing large amounts of data from brain imaging studies. One promising avenue is automated meta-analyses, in which natural language processing methods are used to identify the brain regions consistently associated with certain semantic concepts (e.g., "social", "reward") across large corpora of studies. This study builds on this approach by demonstrating how, in fMRI meta-analyses, individual voxels can be treated as vectors in a semantic space and evaluated for their "proximity" to terms of interest. In this technique, a low-dimensional semantic space is built from brain imaging study texts, allowing words in each text to be represented as vectors (where words that frequently appear together are near each other in the semantic space). Consequently, each voxel in a brain mask can be represented as a normalized vector sum of all of the words in the studies that showed activation in that voxel. The entire brain mask can then be visualized in terms of each voxel's proximity to a given term of interest (e.g., "vision", "decision making") or collection of terms (e.g., "theory of mind", "social", "agent"), as measured by the cosine similarity between the voxel's vector and the term vector (or the average of multiple term vectors). Analysis can also proceed in the opposite direction, allowing word cloud visualizations of the nearest semantic neighbors for a given brain region. This approach allows for continuous, fine-grained metrics of voxel-term associations, and relies on state-of-the-art "open vocabulary" methods that go beyond mere word counts. An analysis of over 11,000 neuroimaging studies from an existing meta-analytic fMRI database demonstrates that this technique can be used to recover known neural bases for multiple psychological functions, suggesting this method's utility for efficient, high-level meta-analyses of localized brain function. While automated text analytic methods are no replacement for deliberate, manual meta-analyses, they seem to show promise for the efficient aggregation of large bodies of scientific knowledge, at least at a relatively general level.
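As a rough illustration of this voxel-as-vector idea (a minimal sketch with hypothetical toy word vectors and study lists, not the authors' code), each voxel can be scored by the cosine similarity between its normalized word-vector sum and a term vector of interest:

```python
# Minimal sketch: a voxel is the normalized sum of word vectors from studies
# reporting activation there; proximity to a term is cosine similarity.
import numpy as np

def voxel_vector(word_vectors, study_words, studies_with_activation):
    """Normalized vector sum of all words in studies activating this voxel."""
    total = np.zeros(next(iter(word_vectors.values())).shape)
    for study in studies_with_activation:
        for word in study_words[study]:
            if word in word_vectors:
                total += word_vectors[word]
    norm = np.linalg.norm(total)
    return total / norm if norm > 0 else total

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical toy data: a 3-dimensional "semantic space".
word_vectors = {"reward": np.array([0.9, 0.1, 0.0]),
                "social": np.array([0.1, 0.9, 0.2]),
                "vision": np.array([0.0, 0.2, 0.9])}
study_words = {"study_1": ["reward", "social"], "study_2": ["reward"]}
v = voxel_vector(word_vectors, study_words, ["study_1", "study_2"])
print(cosine_similarity(v, word_vectors["reward"]))  # proximity to "reward"
```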

Keywords: FMRI, machine learning, meta-analysis, text analysis

Procedia PDF Downloads 432
901 The Influence of Applying Mechanical Chest Compression Systems on the Effectiveness of Cardiopulmonary Resuscitation in Out-of-Hospital Cardiac Arrest

Authors: Slawomir Pilip, Michal Wasilewski, Daniel Celinski, Leszek Szpakowski, Grzegorz Michalak

Abstract:

The aim of the study was to evaluate the effectiveness of cardiopulmonary resuscitation performed by Medical Emergency Teams (METs) at the scene of an accident, including the use of mechanical chest compression systems. In the period of January-May 2017, there were 137 cases of sudden cardiac arrest in a chosen region of Eastern Poland with 360,000 inhabitants. Medical records and questionnaires filled in by METs were analysed to assess the effectiveness of cardiopulmonary resuscitations, which were considered effective when an early indication of spontaneous circulation was obtained and the patient was taken to hospital. A chest compression system was applied by METs in 60 cases (Lucas3 - 34 patients; AutoPulse - 24 patients). The effectiveness of cardiopulmonary resuscitation among patients treated with a chest compression system was much higher (43.3%) than with manual cardiac massage (36.4%). The effectiveness with the Lucas3 chest compression system was 47%, while with AutoPulse it was 33.3%. The average ambulance arrival time could have had a significant impact on the subsequent effectiveness of cardiopulmonary resuscitation in these cases: ambulances equipped with Lucas3 reached the destination within 8 minutes, whereas those with AutoPulse needed 12.1 minutes. Moreover, effective basic life support (BLS) by bystanders before the ambulance arrival was much more frequent for ambulances with Lucas3 than for those with AutoPulse; the percentage of bystander BLS among patients treated with Lucas3 was 26.5%, compared with 20.8% for AutoPulse. In total, BLS was performed by bystanders before the ambulance arrival in 25% of the patients who were later treated with a chest compression system by METs. In these cases, not only was a shockable cardiac rhythm obtained in 47% of patients, but an early indication of spontaneous circulation was also obtained in all of them. Both Lucas3 and AutoPulse were evaluated as significantly useful in improving the effectiveness of cardiopulmonary resuscitation by 97% of Medical Emergency Teams. Therefore, the implementation of chest compression systems makes cardiopulmonary resuscitation even more effective. The ambulance arrival time, successful bystander BLS before the ambulance arrival and the presence of a shockable cardiac rhythm determine an early indication of spontaneous circulation among patients after sudden cardiac arrest.

Keywords: cardiac arrest, effectiveness, mechanical chest compression systems, resuscitation

Procedia PDF Downloads 235
900 Efficacy of Knowledge Management Practices in Selected Public Libraries in the Province of KwaZulu-Natal, South Africa

Authors: Petros Dlamini, Bethiweli Malambo, Maggie Masenya

Abstract:

Knowledge management practices are very important in public libraries, especially in the era of the information society. The success of public libraries depends on the recognition and application of knowledge management practices. The study investigates the value and challenges of knowledge management practices in public libraries. Three research objectives informed the study: to identify knowledge management practices in public libraries, to understand the value of knowledge management practices in public libraries, and to determine the factors hampering knowledge management practices in public libraries. The study was informed by the interpretivist research paradigm, which is associated with qualitative studies. In that light, the study collected data from eight librarians and/or library heads, who were purposively selected from public libraries. The study adopted a social anthropological approach, which thoroughly evaluated each participant's response. Data were collected from the respondents through telephonic semi-structured interviews and assessed accordingly. Furthermore, the study used content analysis for data interpretation. The chosen data analysis method allowed the study to achieve its main purpose with concrete and valid information. The study's findings showed that all six (100%) selected public libraries apply knowledge management practices. The findings revealed that public libraries have knowledge sharing as the main knowledge management practice. It was noted that public libraries employ many practices, but each library employed its practices of choice depending on its knowledge management structure. The findings further showed that knowledge management practices in public libraries are implemented through meetings, training, information sessions, and awareness campaigns, to mention a few. The findings revealed that knowledge management practices make the libraries usable. Furthermore, it was asserted that knowledge management practices in public libraries meet users' needs and expectations and equip them with skills. It was discovered that all participating public libraries from Umkhanyakude district municipality valued their knowledge management practices as the pillar and foundation of their services. Noticeably, knowledge management practices improve users' standard of living and build an information society. The findings of the study showed that librarians should be responsible for the value of knowledge management practices, as they are qualified personnel. The results also showed that 83.35% of public libraries had factors hampering knowledge management practices. These factors include, but are not limited to, shortage of funds, resources and space, and political interference. Several suggestions were made to improve knowledge management practices in public libraries, including improving the library budget, increasing libraries' building sizes, and conducting more staff training.

Keywords: knowledge management, knowledge management practices, storage, dissemination

Procedia PDF Downloads 70
899 Antiangiogenic and Pro-Apoptotic Properties of Shemamruthaa: An Herbal Preparation in Experimental Mammary Carcinoma-Bearing Rats and Breast Cancer Cell Line In vitro

Authors: Nandhakumar Elumalai, Purushothaman Ayyakannu, Sachidanandam T. Panchanatham

Abstract:

Background: Understanding the basic mechanisms and factors underlying tumor growth and invasion has gained attention in recent times. The processes of angiogenesis and apoptosis are known to play a vital role in various stages of cancer. The vascular endothelial growth factor (VEGF) is well established as one of the key regulators of tumor angiogenesis, while MMPs are known for their exclusive ability to degrade the ECM. Objective: The present study was designed to evaluate the pro-apoptotic and anti-angiogenic activity of the herbal formulation Shemamruthaa. The anticancer activity of Shemamruthaa was tested in a breast cancer cell line (MCF-7). Results of MTT, trypan blue and flow cytometric analyses of apoptosis suggested that Shemamruthaa can induce cytotoxicity in cancer cells in a concentration- and time-dependent manner and induce apoptosis. With these results, we further evaluated the antiangiogenic and pro-apoptotic activities of Shemamruthaa in DMBA-induced mammary carcinoma in Sprague-Dawley rats. Mammary tumour was induced in 8-week-old Sprague-Dawley rats by gastric intubation of 25 mg DMBA in 1 ml olive oil. After a 90-day induction period, the rats were orally administered Shemamruthaa (400 mg/kg body wt) for 45 days. Treatment with the drug SM significantly modulated the expression of p53, MMP-2, MMP-3, MMP-9 and VEGF by means of its anti-angiogenic and protease-inhibiting activity. Conclusion: Based on these results, it may be concluded that the formulation Shemamruthaa, composed of dried flowers of Hibiscus rosa-sinensis, fruits of Emblica officinalis, and honey, exhibits pronounced antiproliferative and apoptotic effects. This enhanced anticancer effect of Shemamruthaa might be attributed to the synergistic action of polyphenols such as flavonoids, tannins, alkaloids, glycosides, saponins, steroids, terpenoids, vitamin C, niacin, pyrogallol, hydroxymethylfurfural, trilinolein, and other compounds present in the formulation. Collectively, these results demonstrate that Shemamruthaa holds potential to be developed as a potent chemotherapeutic agent against mammary carcinoma.

Keywords: Shemamruthaa, flavonoids, MCF-7 cell line, mammary cancer

Procedia PDF Downloads 235
898 Loss of Control Eating as a Key Factor of the Psychological Symptomatology Related to Childhood Obesity

Authors: L. Beltran, S. Solano, T. Lacruz, M. Blanco, M. Rojo, M. Graell, A. R. Sepulveda

Abstract:

Introduction and Objective: Given the difficulties of assessing Binge Eating Disorder during childhood, episodes of Loss of Control (LOC) eating can be a key symptom. The objective is to determine the prevalence of eating psychopathology depending on the type of evaluation and to find out which psychological characteristics differentiate overweight or obese children who present LOC from those who do not. Material and Methods: 170 children from 8 to 12 years of age with overweight or obesity (BMI percentile > 85) were evaluated through the Primary Care Centers of Madrid. Sociodemographic data and psychological measures were collected through the Kiddie-Schedule for Affective Disorders & Schizophrenia, Present & Lifetime Version (K-SADS-PL) diagnostic interview and self-applied questionnaires: children's eating attitudes (ChEAT), depressive symptomatology (CDI), anxiety (STAIC), general self-esteem (LAWSEQ), body self-esteem (BES), perceived teasing (POTS) and perfectionism (CAPS). Results: 15.2% of the sample exceeded the ChEAT cut-off point, presenting a risk of pathological eating; 5.88% presented an eating disorder according to the diagnostic interview (2.35% Binge Eating Disorder), and 33.53% had LOC episodes. No relationship was found between the presence of LOC and a clinical diagnosis of eating disorders according to DSM-5; however, the group with LOC presented a higher risk of eating psychopathology as measured by the ChEAT (p < .02). Significant differences were found in the group with LOC (p < .02): higher z-BMI, lower body self-esteem, greater anxious symptomatology, greater frequency of weight-related teasing, and greater effect of teasing towards both weight and competence, compared to their peers without LOC. Conclusion: In line with previous studies in samples of overweight children, in this Spanish sample of children with obesity we found a moderate prevalence of eating disorders and a high presence of LOC episodes, which are related to both eating and general psychopathology. These findings confirm that excluding LOC episodes as a diagnostic criterion can underestimate the presence of eating psychopathology during this developmental stage. According to these results, it is highly recommended to promote school-based programs that address LOC episodes in order to reduce associated symptoms. This study is part of a project funded by the Ministry of Innovation and Science (PSI2011-23127).

Keywords: childhood obesity, eating psychopathology, loss-of-control eating, psychological symptomatology

Procedia PDF Downloads 91
897 Efficient Reuse of Exome Sequencing Data for Copy Number Variation Callings

Authors: Chen Wang, Jared Evans, Yan Asmann

Abstract:

With the rapid evolution of next-generation sequencing techniques, whole-exome or exome-panel data have become a cost-effective way to detect small exonic mutations, but there has been a growing desire to accurately detect copy number variations (CNVs) as well. In order to address these research and clinical needs, we developed a sequencing coverage pattern-based method for copy number detection, data integrity checks, CNV calling, and visualization reporting. The developed methodologies include complete automation to increase usability, genome content-coverage bias correction, CNV segmentation, data quality reports, and publication-quality images. Poor-quality outlier samples were identified and removed automatically. Multiple experimental batches were routinely detected and reduced to a clean subset of samples before analysis. Algorithm improvements were also made to improve somatic CNV detection as well as germline CNV detection in trio families. Additionally, a set of utilities was included to help users produce CNV plots for genes of interest. We demonstrate the somatic CNV enhancements by accurately detecting CNVs in exome-wide data from The Cancer Genome Atlas cancer samples and in a lymphoma case study with paired tumor and normal samples. We also show the efficient reuse of existing exome sequencing data for improved germline CNV calling in a trio family from Phase III of the 1000 Genomes Project, detecting CNVs with various modes of inheritance. The performance of the developed method is evaluated by comparing CNV calling results with results from other orthogonal copy number platforms. Through our case studies, reuse of exome sequencing data for calling CNVs provides several notable benefits, including better quality control of exome sequencing data, improved joint analysis with single nucleotide variant calls, and novel genomic discovery from under-utilized existing whole-exome and custom exome-panel data.
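As a rough illustration of coverage-pattern-based CNV detection (a minimal sketch with illustrative thresholds and toy depths, not the authors' pipeline), per-exon read depths can be normalized against a reference panel and flagged by log2 ratio:

```python
# Minimal sketch: normalize per-exon read depth against a reference panel and
# flag exons whose log2 ratio suggests a gain or loss.
import numpy as np

def log2_ratios(sample_depth, panel_depths):
    """sample_depth: per-exon depths; panel_depths: matrix (samples x exons)."""
    sample_norm = sample_depth / sample_depth.sum()            # library-size normalization
    panel_norm = panel_depths / panel_depths.sum(axis=1, keepdims=True)
    reference = np.median(panel_norm, axis=0)                  # per-exon reference coverage
    return np.log2((sample_norm + 1e-9) / (reference + 1e-9))

def call_exons(ratios, gain=0.4, loss=-0.6):
    # Illustrative thresholds; real pipelines segment and model noise explicitly.
    return ["gain" if r > gain else "loss" if r < loss else "neutral" for r in ratios]

# Hypothetical toy data: 6 exons, 3 reference samples.
panel = np.array([[100, 120, 80, 90, 110, 100],
                  [ 95, 115, 85, 92, 108,  99],
                  [105, 125, 78, 88, 112, 101]], dtype=float)
sample = np.array([98, 118, 160, 45, 109, 100], dtype=float)   # exon 3 gained, exon 4 lost
print(call_exons(log2_ratios(sample, panel)))
```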

Keywords: bioinformatics, computational genetics, copy number variations, data reuse, exome sequencing, next generation sequencing

Procedia PDF Downloads 243
896 The Analyzer: Clustering Based System for Improving Business Productivity by Analyzing User Profiles to Enhance Human Computer Interaction

Authors: Dona Shaini Abhilasha Nanayakkara, Kurugamage Jude Pravinda Gregory Perera

Abstract:

E-commerce platforms have revolutionized the shopping experience, offering convenient ways for consumers to make purchases. To improve interactions with customers and optimize marketing strategies, it is essential for businesses to understand user behavior, preferences, and needs on these platforms. This paper focuses on enabling businesses to customize interactions with users based on their behavioral patterns, leveraging data-driven analysis and machine learning techniques. Businesses can improve engagement and boost the adoption of e-commerce platforms by aligning behavioral patterns with user goals of usability and satisfaction. We propose TheAnalyzer, a clustering-based system designed to enhance business productivity by analyzing user profiles and improving human-computer interaction. TheAnalyzer seamlessly integrates with business applications, collecting relevant data points based on users' natural interactions without additional burdens such as questionnaires or surveys. It defines five key user analytics as features for its dataset, which are easily captured through users' interactions with e-commerce platforms. This research presents a study demonstrating the successful distinction of users into specific groups based on the five key analytics considered by TheAnalyzer. With the assistance of domain experts, customized business rules can be attached to each group, enabling TheAnalyzer to influence business applications and provide an enhanced personalized user experience. The outcomes are evaluated quantitatively and qualitatively, demonstrating that utilizing TheAnalyzer's capabilities can optimize business outcomes, enhance customer satisfaction, and drive sustainable growth. The findings of this research contribute to the advancement of personalized interactions in e-commerce platforms. By leveraging user behavioral patterns and analyzing both new and existing users, businesses can effectively tailor their interactions to improve customer satisfaction and loyalty, and ultimately drive sales.
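The grouping step can be pictured with a minimal sketch (hypothetical analytics and toy values, not TheAnalyzer's actual features or code): standardize the five user analytics and cluster them with K-means so that business rules can be attached to each resulting group:

```python
# Minimal sketch: cluster standardized user-analytics vectors into groups.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Five hypothetical analytics per user, e.g. session length, pages viewed,
# cart adds, purchases, and days since last visit.
users = np.array([[12.0, 30, 3, 1,  2],
                  [ 2.5,  4, 0, 0, 40],
                  [15.0, 42, 5, 2,  1],
                  [ 3.0,  6, 1, 0, 35],
                  [11.0, 25, 2, 1,  3]])

features = StandardScaler().fit_transform(users)       # put analytics on one scale
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
print(model.labels_)     # group index per user; business rules attach per group
```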

Keywords: data clustering, data standardization, dimensionality reduction, human computer interaction, user profiling

Procedia PDF Downloads 61
895 Sorting Maize Haploids from Hybrids Using Single-Kernel Near-Infrared Spectroscopy

Authors: Paul R Armstrong

Abstract:

Doubled haploids (DHs) have become an important breeding tool for creating maize inbred lines, although several bottlenecks in the DH production process limit wider development, application, and adoption of the technique. DH kernels are typically sorted manually and represent about 10% of the seeds in a much larger pool in which the remaining 90% are hybrid siblings. This places time constraints on DH production, and manual sorting is often not accurate. Automated sorting based on the chemical composition of the kernel can be effective, but devices such as NMR have not achieved the sorting speed needed to be a cost-effective replacement for manual sorting. This study evaluated a single-kernel near-infrared reflectance spectroscopy (skNIR) platform to accurately identify DH kernels based on oil content. The skNIR platform is a higher-throughput device, approximately 3 seeds/s, that uses spectra to predict the oil content of each kernel from maize crosses intentionally developed to create larger than normal oil differences, 1.5%-2%, between DH and hybrid kernels. Spectra from the skNIR were used to construct a partial least squares regression (PLS) model for oil, or for a categorical reference value of 1 (DH kernel) or 2 (hybrid kernel), and then used to sort several crosses to evaluate performance. Two approaches were used for sorting. The first used a general PLS model developed from all crosses to predict oil content, which was then used for sorting each induction cross; the second was the development of a specific model from a single induction cross in which approximately fifty DH and one hundred hybrid kernels were used. This second approach used a categorical reference value of 1 or 2, instead of oil content, for the PLS model, and kernels selected for the calibration set were manually referenced based on traditional commercial methods using coloration of the tip cap and germ areas. The generalized PLS oil model statistics were R² = 0.94 and RMSE = 0.93% for kernels spanning an oil content of 2.7% to 19.3%. Sorting by this model resulted in extracting 55% to 85% of haploid kernels from the four induction crosses. Using the second method of generating a model for each cross yielded model statistics ranging from R² = 0.96 to 0.98 and RMSE from 0.08 to 0.10. Sorting in this case resulted in 100% correct classification but required models that were cross-specific. In summary, the first, generalized oil model method could be used to sort a significant number of kernels from a kernel pool but did not approach the accuracy of developing a sorting model from a single cross. The penalty for the second method is that a PLS model would need to be developed for each individual cross. In conclusion, both methods could find useful application in the sorting of DH from hybrid kernels.
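The PLS sorting idea can be sketched as follows (synthetic spectra and a hypothetical oil threshold, not the study's calibration data): fit a PLS regression of oil content on kernel spectra and flag low-oil kernels as putative haploids:

```python
# Minimal sketch: PLS regression of oil content on single-kernel spectra,
# followed by threshold-based sorting of putative haploids.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_kernels, n_wavelengths = 60, 200
oil = rng.uniform(3, 19, n_kernels)                      # % oil per kernel (synthetic)
spectra = np.outer(oil, rng.normal(0.01, 0.002, n_wavelengths))
spectra += rng.normal(0, 0.01, spectra.shape)            # add measurement noise

pls = PLSRegression(n_components=5).fit(spectra, oil)
predicted_oil = pls.predict(spectra).ravel()

threshold = 6.0                                          # hypothetical cut-off (%)
haploid_flags = predicted_oil < threshold                # haploids carry less oil
print(f"kernels flagged as haploid: {haploid_flags.sum()}")
```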

Keywords: NIR, haploids, maize, sorting

Procedia PDF Downloads 290
894 Ionometallurgy for Recycling Silver in Silicon Solar Panel

Authors: Emmanuel Billy

Abstract:

This work is part of the CABRISS project (an H2020 project), which aims to develop innovative cost-effective methods for the extraction of materials from different sources of PV waste: Si-based panels, thin-film panels and water-diluted Si slurries. Aluminum, silicon, indium, and silver in particular will be extracted from these wastes in order to constitute a materials feedstock that can later be used in a closed-loop process. The extraction of metals from silicon solar cells is often an energy-intensive process. It requires either smelting or leaching at elevated temperature, or the use of large quantities of strong acids or bases that require energy to produce. The energy input equates to a significant cost and an associated CO2 footprint, both of which it would be desirable to reduce. There is therefore a need to develop more energy-efficient and environmentally compatible processes. In this context, 'ionometallurgy' could offer a new set of environmentally benign processes for metallurgy. This work demonstrates that ionic liquids provide one such method, since they can be used to dissolve and recover silver. The overall process combines leaching, recovery and the possibility of re-using the solution in a closed-loop process. This study aims to evaluate and compare different ionic liquids for leaching and recovering silver. An electrochemical analysis is first implemented to define the best system for Ag dissolution. The effects of temperature, concentration and oxidizing agent are evaluated by this approach. Further, a comparative study of leaching efficiency between the conventional approach (nitric acid, thiourea) and the ionic liquids (Cu and Al) is conducted. Specific attention has been paid to the selection of the ionic liquids: electrolytes composed of chelating anions (Cl, Br, I) are used to facilitate the lixiviation and to avoid problems dealing with the solubility of metallic species and of classical additional ligands. This approach reduces the cost of the process and facilitates the re-use of the leaching medium. To define the most suitable ionic liquids, electrochemical experiments have been carried out to evaluate the oxidation potential of the silver included in the crystalline solar cells. Then, chemical dissolution of metals from crystalline solar cells has been performed with the most promising ionic liquids. After the chemical dissolution, electrodeposition has been performed to recover silver in metallic form.

Keywords: electrodeposition, ionometallurgy, leaching, recycling, silver

Procedia PDF Downloads 229
893 Spectrophotometric Detection of Histidine Using Enzyme Reaction and Examination of Reaction Conditions

Authors: Akimitsu Kugimiya, Kouhei Iwato, Toru Saito, Jiro Kohda, Yasuhisa Nakano, Yu Takano

Abstract:

The measurement of amino acid content is reported to be useful for the diagnosis of several types of diseases, including lung cancer, gastric cancer, colorectal cancer, breast cancer, prostate cancer, and diabetes. The conventional detection methods for amino acids are high-performance liquid chromatography (HPLC) and liquid chromatography-mass spectrometry (LC-MS), but they have several drawbacks: the equipment is cumbersome and the techniques are costly in terms of both time and money. In contrast, biosensors and biosensing methods provide more rapid and facile detection strategies that use simple equipment. The authors have reported a novel approach for the detection of each amino acid that involves the use of aminoacyl-tRNA synthetase (aaRS) as a molecular recognition element, because aaRS is expected to have selective binding ability for its corresponding amino acid. The consecutive enzymatic reactions used in this study are as follows: aaRS binds to its cognate amino acid and releases inorganic pyrophosphate; hydrogen peroxide (H₂O₂) is then produced by the enzyme reactions of inorganic pyrophosphatase and pyruvate oxidase. Trinder's reagent was added to the reaction mixture, and the absorbance change at 556 nm was measured using a microplate reader. In this study, an amino acid-sensing method using histidyl-tRNA synthetase (HisRS; histidine-specific aaRS) as the molecular recognition element, in combination with the Trinder's reagent spectrophotometric method, was developed. The quantitative performance and selectivity of the method were evaluated, and the optimal enzyme reaction and detection conditions were determined. The authors developed a simple and rapid method for detecting histidine through a combination of enzymatic reaction and spectrophotometric detection. In this study, HisRS was used to detect histidine, and the reaction and detection conditions were optimized for quantitation of this amino acid in the range of 1-100 µM histidine. The detection limits are sufficient to analyze this amino acid in biological fluids. This work was partly supported by a Hiroshima City University Grant for Special Academic Research (General Studies).
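The quantitation step can be illustrated with a minimal sketch (hypothetical calibration standards, not the reported data): a linear calibration of the 556 nm absorbance against histidine concentration is inverted to estimate an unknown sample within the 1-100 µM range:

```python
# Minimal sketch: linear calibration of A556 vs. histidine concentration.
import numpy as np

# Hypothetical standards within the 1-100 uM working range.
concentration_uM = np.array([1, 5, 10, 25, 50, 100], dtype=float)
absorbance_556 = np.array([0.012, 0.055, 0.108, 0.265, 0.521, 1.030])

slope, intercept = np.polyfit(concentration_uM, absorbance_556, 1)

def histidine_from_absorbance(a556):
    """Invert the calibration line to estimate histidine concentration (uM)."""
    return (a556 - intercept) / slope

print(round(histidine_from_absorbance(0.40), 1))   # unknown sample, roughly 39 uM
```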

Keywords: amino acid, aminoacyl-tRNA synthetase, biosensing, enzyme reaction

Procedia PDF Downloads 268
892 Properties and Microstructure of Scaled-Up MgO Concrete Blocks Incorporating Fly Ash or Ground Granulated Blast-Furnace Slag

Authors: L. Pu, C. Unluer

Abstract:

MgO cements have the potential to sequester CO2 in construction products and can partially or completely replace PC in concrete. Construction blocks are a promising application for reactive MgO cements. The main advantages of blocks are: (i) suitability for sequestering CO2 due to their initially porous structure; (ii) lack of need for in-situ treatment, as carbonation can take place during fabrication; and (iii) high potential for commercialization. Both the strength gain and the carbon sequestration of MgO cements depend on the carbonation process. Fly ash and ground granulated blast-furnace slag (GGBS) are pozzolanic materials and have been shown to improve many performance characteristics of concrete, such as strength, workability, permeability, durability and corrosion resistance. A very limited amount of work has been reported on the production of MgO blocks on a large scale so far, and a much more extensive study, with blocks of different mix designs, is needed to verify the feasibility of commercial production. The changes in the performance of the samples were evaluated by compressive strength testing. The properties of the carbonation products were identified by X-ray diffraction (XRD) and scanning electron microscopy (SEM)/field emission scanning electron microscopy (FESEM), and the degree of carbonation was obtained by thermogravimetric analysis (TGA), XRD and energy dispersive X-ray (EDX) analysis. The results of this study enabled an understanding of the relationship between lab-scale samples and scaled-up blocks based on their mechanical performance and microstructure. Results indicate that for both scaled-up and lab-scale samples, MgO samples always had the highest strength, followed by MgO-fly ash samples, while MgO-GGBS samples had the lowest strength. The lower strength of the MgO with fly ash/GGBS samples at the early stage is related to the relatively slow hydration process of pozzolanic materials. Lab-scale cubic samples were observed to have higher strength than scaled-up samples; the large size of the scaled-up samples made it more difficult for CO2 to reach the inner part of the samples, so fewer carbonation products formed. XRD, TGA and FESEM/EDX results indicate the existence of brucite and HMCs in the MgO samples, M-S-H and hydrotalcite in the MgO-fly ash samples, and C-S-H and hydrotalcite in the MgO-GGBS samples. The formation of hydration products (M-S-H, C-S-H, hydrotalcite) and carbonation products (hydromagnesite, dypingite) increased with curing duration, which is the reason for the increasing strength. This study verifies the advantage of large-scale MgO blocks over common PC blocks and the feasibility of commercial production of MgO blocks.

Keywords: reactive MgO, fly ash, ground granulated blast-furnace slag, carbonation, CO₂

Procedia PDF Downloads 174
891 Relationship between Readability of Paper-Based Braille and Character Spacing

Authors: T. Nishimura, K. Doi, H. Fujimoto, T. Wada

Abstract:

The number of people with acquired visual impairments has increased in recent years. In specialized courses at schools for the blind and in Braille lessons offered by social welfare organizations, many people with acquired visual impairments cannot learn to read Braille adequately. One reason is that the common Braille patterns, designed for readers with visual impairments who already have mature Braille reading skills, are difficult for Braille reading beginners to read. In addition, Braille book manufacturing companies have scant knowledge of which Braille patterns would be easy for beginners to read. Therefore, it is necessary to investigate Braille patterns that are easy for beginners to read. In order to obtain such knowledge, this study aimed to elucidate the relationship between the readability of paper-based Braille and its patterns. The study focused on character spacing, which readily affects Braille reading ability, to determine a suitable character spacing ratio (the ratio of character spacing to dot spacing) for beginners. Specifically, considering beginners with acquired visual impairments who are unfamiliar with reading Braille, we quantitatively evaluated the effect of the character spacing ratio on Braille readability through an experiment using sighted subjects with no experience of reading Braille. In this experiment, ten blindfolded sighted adults were asked to read a test piece (three Braille characters). The Braille used in each test piece was composed of five dots. The subjects were asked to touch the Braille by sliding their forefinger over the test piece immediately after the examiner gave the signal to start, and to release their forefinger from the test piece when they perceived the Braille characters. Seven conditions of character spacing ratio were tested (1.2, 1.4, 1.5, 1.6, 1.8, 2.0, and 2.2), together with four conditions of dot spacing (2.0, 2.5, 3.0, and 3.5 mm). Ten trials were conducted for each condition. The test pieces were created using NISE Graphic, which can print Braille with arbitrary values of character spacing and dot spacing at high accuracy. We adopted correct rate, reading time, and subjective readability as evaluation indices to investigate how the character spacing ratio affects Braille readability. The results showed that Braille reading beginners could read Braille accurately and quickly when the character spacing ratio is at least 1.8 and the dot spacing is at least 3.0 mm. Conversely, it is difficult for beginners to read Braille accurately and quickly when both character spacing and dot spacing are small. This study thus reveals a character spacing ratio that makes reading easier for Braille beginners.
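The character spacing ratio and the readability criterion suggested by these results can be expressed in a short sketch (the spacing values below are hypothetical):

```python
# Minimal sketch: character spacing ratio and the beginner-readability criterion
# suggested by the results above (ratio >= 1.8 and dot spacing >= 3.0 mm).
def character_spacing_ratio(character_spacing_mm: float, dot_spacing_mm: float) -> float:
    return character_spacing_mm / dot_spacing_mm

def beginner_friendly(character_spacing_mm: float, dot_spacing_mm: float) -> bool:
    ratio = character_spacing_ratio(character_spacing_mm, dot_spacing_mm)
    return ratio >= 1.8 and dot_spacing_mm >= 3.0

print(character_spacing_ratio(6.0, 3.0))   # 2.0
print(beginner_friendly(6.0, 3.0))         # True
print(beginner_friendly(3.0, 2.5))         # False (ratio 1.2, dots too close)
```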

Keywords: Braille, character spacing, people with visual impairments, readability

Procedia PDF Downloads 270
890 Maintenance Wrench Time Improvement Project

Authors: Awadh O. Al-Anazi

Abstract:

As part of organizational needs for successful maintenance activities, a proper management system needs to be put in place to ensure the effectiveness of those activities. The management system shall clearly describe the process of identifying, prioritizing, planning, scheduling, and executing all maintenance activities, and of providing valuable feedback on them. A complete and accurate system, properly implemented, shall provide the organization with a strong platform for effective maintenance activities that result in efficient outcomes and business success. The purpose of this research was to introduce a practical tool for measuring the maintenance efficiency level within Saudi organizations. A comprehensive study was launched across many maintenance professionals throughout leading Saudi organizations. The study covered five main categories: work process, identification, planning and scheduling, execution, and performance monitoring. Each category was evaluated across many dimensions to determine its current effectiveness through a five-level scale from 'process is not there' to 'mature implementation'. Wide participation was received, responses were analyzed, and the study concluded by highlighting major gaps and improvement opportunities within Saudi organizations. One effective implementation of the efficiency enhancement efforts was deployed in Saudi Kayan (one of SABIC's affiliates). The details below describe the project outcomes. SK overall maintenance wrench time was measured at 20% (on average) of the total daily working time. The assessment indicated several organizational gaps, such as a high amount of reactive work, poor coordination and teamwork, unclear roles and responsibilities, as well as underutilization of resources. A multidisciplinary team was assigned to design and implement an appropriate work process capable of governing the execution process, improving maintenance workforce efficiency, and maximizing wrench time (targeting > 50%). The enhanced work process was introduced through brainstorming and wide benchmarking, incorporated with a proper change management plan and leadership sponsorship. The project was completed in 2018. Achieved results: SK wrench time was improved to 50%, which resulted in (1) a reduced average notification completion time and (2) reduced maintenance expenses on overtime and manpower support (3.6 MSAR actual savings from budget within 6 months).

Keywords: efficiency, enhancement, maintenance, work force, wrench time

Procedia PDF Downloads 123
889 Formulation, Preparation, and Evaluation of Coated Desloratadine Oral Disintegrating Tablets

Authors: Mohamed A. Etman, Mona G. Abd-Elnasser, Mohamed A. Shams-Eldin, Aly H. Nada

Abstract:

Orally disintegrating tablets (ODTs) are gaining importance as new drug delivery systems and have emerged as one of the most popular and widely accepted dosage forms, especially for pediatric and geriatric patients. Advantages such as administration without water, anywhere and anytime, make them suitable for geriatric and pediatric patients. They are also suitable for the mentally ill, the bed-ridden and patients who do not have easy access to water. The benefits in terms of patient compliance, rapid onset of action, increased bioavailability, and good stability make these tablets popular as a dosage form of choice in the current market. These dosage forms dissolve or disintegrate in the oral cavity within a matter of seconds without the need for water or chewing. Desloratadine is a tricyclic antihistaminic with a selective, peripheral H1-antagonist action. It is an antagonist at histamine H1 receptors and an antagonist at all subtypes of the muscarinic acetylcholine receptor. Desloratadine is the major metabolite of loratadine. Twelve different placebo ODTs were prepared (F1-F12) using different functional excipients and were evaluated for their compressibility, hardness and disintegration time. All formulations were non-sticky except four (F8, F9, F10 and F11). All formulations were compressible with the exception of F2. Variable disintegration times were found, ranging between 20 and 120 seconds. It was found that F12 showed the shortest disintegration time (20 seconds) without any sticking, which could be due to the use of a high percentage of superdisintegrants. Desloratadine showed a bitter taste when formulated as an ODT without any treatment; therefore, different techniques were tried in order to mask its bitter taste. Using Eudragit EPO resulted in complete masking of the bitter taste of the drug and increased its acceptability to volunteers. The compressible, non-sticky formulations (F1, F3, F4, F5, F6, F7 and F12) were subjected to further evaluation tests after addition of coated desloratadine, including weight uniformity, wetting time, and friability testing. Fairly good weight uniformity values were observed in all the tested formulations, with F12 exhibiting the shortest wetting time (14.7 seconds) and consequently the shortest disintegration time (20 seconds). The dissolution profile showed that 100% desloratadine release was attained after only 2.5 minutes from the prepared ODT (F12), with a dissolution efficiency of 95%.

Keywords: Desloratadine, orally disintegrating tablets (ODTs), formulations, taste masking

Procedia PDF Downloads 439
888 A Differential Scanning Calorimetric Study of Frozen Liquid Egg Yolk Thawed by Different Thawing Methods

Authors: Karina I. Hidas, Csaba Németh, Anna Visy, Judit Csonka, László Friedrich, Ildikó Cs. Nyulas-Zeke

Abstract:

Egg yolk is a popular ingredient in the food industry due to its gelling, emulsifying, colouring, and coagulating properties. Because of the heat sensitivity of its proteins, egg yolk can only be heat treated at low temperatures, so its shelf life, even with the addition of a preservative, is only a few weeks. Freezing can increase the shelf life of liquid egg yolk up to 1 year, but below -6 °C it undergoes gelation, which is an irreversible phenomenon. The degree of gelation depends on the time and temperature of freezing and is influenced by the thawing process. Therefore, in our experiment, we examined egg yolks thawed in different ways. In this study, unpasteurized, industrially broken, separated, and homogenized liquid egg yolk was used. Freshly produced samples were frozen in plastic containers at -18 °C in a laboratory freezer. Frozen storage was performed for 90 days. Samples were analysed at day zero (unfrozen) and after frozen storage for 1, 7, 14, 30, 60 and 90 days. Samples were thawed in two ways (at 5 °C for 24 hours and at 30 °C for 3 hours) before testing. Calorimetric properties were examined by differential scanning calorimetry, where heat flow curves were recorded. Denaturation enthalpy values were calculated by fitting a linear baseline, and denaturation temperature values were evaluated. In addition, the dry matter content of the samples was measured by the oven method, drying at 105 °C to constant weight. For statistical analysis, two-way ANOVA (α = 0.05) was employed, with thawing mode and freezing time as the fixed factors. Denaturation enthalpy values decreased from 1.1 to 0.47 by the end of the storage experiment, which represents a reduction of about 60%. The effect of freezing time on these values was significant; the enthalpy of samples stored frozen for just 1 day was already significantly reduced. However, the mode of thawing did not significantly affect the denaturation enthalpy of the samples, and no interaction was seen between the two factors. The denaturation temperature and dry matter content did not change significantly with either freezing time or thawing mode. The results of our study show that slow freezing and frozen storage at -18 °C greatly reduce the amount of protein that can be denatured in egg yolk, indicating that the proteins have been subjected to aggregation, denaturation or other protein conversions regardless of how the samples were thawed.
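The statistical design can be illustrated with a minimal sketch (synthetic enthalpy values and replicate counts, not the study's measurements) of a two-way ANOVA with thawing mode and freezing time as fixed factors:

```python
# Minimal sketch: two-way ANOVA on denaturation enthalpy with freezing time and
# thawing mode as fixed factors (synthetic data for illustration only).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
days = [1, 7, 14, 30, 60, 90]
rows = []
for day in days:
    for thaw in ["5C_24h", "30C_3h"]:
        for _ in range(3):                       # hypothetical replicates per cell
            enthalpy = 1.1 - 0.006 * day + rng.normal(0, 0.05)
            rows.append({"day": day, "thaw": thaw, "enthalpy": enthalpy})
data = pd.DataFrame(rows)

model = ols("enthalpy ~ C(day) * C(thaw)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))           # main effects and interaction
```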

Keywords: denaturation enthalpy, differential scanning calorimetry, liquid egg yolk, slow freezing

Procedia PDF Downloads 111
887 Evaluation of Neuroprotective Potential of Olea europaea and Malus domestica in Experimentally Induced Stroke Rat Model

Authors: Humaira M. Khan, Kanwal Asif

Abstract:

Ischemic stroke is a neurological disorder with a complex pathophysiology associated with motor, sensory and cognitive deficits. The major approaches developed to treat acute ischemic stroke fall into two categories: thrombolysis and neuroprotection. The objectives of this study were to evaluate the neuroprotective and anti-thrombolytic effects of Olea europaea (olive oil) and Malus domestica (apple cider vinegar), and of their combination, in a rat stroke model. Furthermore, histopathological analysis was performed to assess the severity of ischemia among treated and reference groups. Male albino rats (12 months of age) weighing 300-350 g were acclimatized and subjected to the middle cerebral artery occlusion method for stroke induction. Olea europaea and Malus domestica were administered orally in doses of 0.75 ml/kg and 3 ml/kg, respectively, and the combination was administered at doses of 0.375 ml/kg and 1.5 ml/kg, prophylactically for 21 consecutive days. The negative control group was dosed with normal saline, whereas piracetam (250 mg/kg) was administered as the reference. The neuroprotective activity of standard piracetam, Olea europaea, Malus domestica and their combination was evaluated by performing functional outcome tests, i.e., the cylinder, pasta, ladder rung, pole and water maze tests. Rats were subjected to surgery after 21 days of treatment for analysis of stroke recovery. Olea europaea and Malus domestica in individual doses of 0.75 ml/kg and 3 ml/kg, respectively, showed neuroprotection by significant improvement in the ladder rung test (121.6 ± 0.92; 128.2 ± 0.73) as compared to the reference (125.4 ± 0.74). Both test doses showed significant neuroprotection as compared to the reference (9.60 ± 0.50) in the pasta test (8.40 ± 0.24; 9.80 ± 0.37), whereas in the cylinder test, the experimental groups showed a significant increase in movements (6.60 ± 0.24; 8.40 ± 0.24) in contrast to the reference (7.80 ± 0.37). There was a decrease in the percentage of time taken to reach the hidden platform in the water maze test (56.80 ± 0.58; 61.80 ± 0.66) at doses of 0.75 ml/kg and 3 ml/kg, respectively, as compared to piracetam (59.40 ± 1.07). Olea europaea and Malus domestica individually showed a significant reduction in the duration of mobility (127.0 ± 0.44; 123.0 ± 0.44) in the pole test as compared to piracetam (124.0 ± 0.70). Histopathological analysis revealed a significant extent of protection from ischemia after the prophylactic treatments. Hence, it is concluded that Olea europaea and Malus domestica are effective neuroprotective agents when given alone, as compared to their combination.

Keywords: ischemia, Malus domestica, neuroprotection, Olea europaea

Procedia PDF Downloads 116
886 Sediment Wave and Cyclic Steps as Mechanism for Sediment Transport in Submarine Canyons Thalweg

Authors: Taiwo Olusoji Lawrence, Peace Mawo Aaron

Abstract:

Seismic analysis of bedforms has proven to be one of the best ways to study deepwater sedimentary features. Canyons are known to be sediment transport conduits. Sediment waves are large-scale depositional bedforms found in various parts of the world's oceans, formed predominantly by suspended-load transport. These undulating features usually have wavelengths of tens of meters to a few kilometers and heights of several meters. Cyclic steps are long-wave, upstream-migrating bedforms bounded by internal hydraulic jumps. They usually occur in regions with high gradients and slope breaks. Cyclic steps and migrating sediment waves are the most common bedforms on the seafloor. Cyclic steps and related sediment wave bedforms are significant to the morphodynamic evolution of deep-water depositional system architectural elements, especially those located along tectonically active margins with high gradients and slope breaks that can promote internal hydraulic jumps in turbidity currents. This report examined sedimentary activity and sediment transport in submarine canyons and provided distinctive insight into the factors that created a complex seabed canyon system in the Ceara-Fortaleza basin, Brazilian Equatorial Margin (BEM). The growing importance of cyclic steps makes it imperative to understand the parameters leading to their formation, migration, and architecture, as well as their controls on sediment transport in the canyon thalweg. We extracted the parameters of the observed bedforms and evaluated their aspect ratio and asymmetry. We developed a relationship between the hydraulic jump magnitude, the depth of the hydraulic fall and the length of the cyclic step therein. It was found that an increase in the height of a cyclic step increases the magnitude of the hydraulic jump and thereby increases the rate of deposition on the preceding stoss side, while an increase in the length of a cyclic step reduces the magnitude of the hydraulic jump and reduces the rate of deposition on the stoss side. Therefore, a flat stoss side was observed on most of the preceding cyclic steps and sediment waves.
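The bedform metrics mentioned above can be expressed with simple definitions (a sketch under assumed definitions of aspect ratio and asymmetry; the example dimensions are hypothetical, not measured values from this study):

```python
# Minimal sketch: simple bedform metrics of the kind discussed above, assuming
# aspect ratio = height / wavelength and asymmetry as the normalized difference
# between lee-side and stoss-side lengths.
def aspect_ratio(height_m: float, wavelength_m: float) -> float:
    return height_m / wavelength_m

def asymmetry(lee_length_m: float, stoss_length_m: float) -> float:
    return (lee_length_m - stoss_length_m) / (lee_length_m + stoss_length_m)

# Hypothetical cyclic step: 12 m high, 900 m wavelength, 250 m lee, 650 m stoss.
print(round(aspect_ratio(12, 900), 4))   # 0.0133
print(round(asymmetry(250, 650), 2))     # -0.44 (stoss-dominated, flatter stoss side)
```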

Keywords: Ceara Fortaleza, submarine canyons, cyclic steps, sediment wave

Procedia PDF Downloads 99
885 Recycling Waste Product for Metal Removal from Water

Authors: Saidur R. Chowdhury, Mamme K. Addai, Ernest K. Yanful

Abstract:

The research was performed to assess the potential of nickel smelter slag, an industrial waste, as an adsorbent for the removal of metals from aqueous solution. An investigation was carried out for arsenic (As), copper (Cu), lead (Pb) and cadmium (Cd) adsorption from aqueous solution. Smelter slag was obtained from Ni ore at the Vale Inco Ni smelter in Sudbury, Ontario, Canada. Batch experimental studies were conducted to evaluate the removal efficiencies of the smelter slag. The slag was characterized by surface analytical techniques and contained different iron oxides and iron silicate-bearing compounds. In this study, the effects of pH, contact time, particle size, competition by other ions and slag dose, as well as the distribution coefficient, were evaluated to determine the optimum adsorption conditions for the slag as an adsorbent for As, Cu, Pb and Cd. The results showed 95-99% removal of As, Cu and Pb, and almost 50-60% removal of Cd, when batch experiments were conducted at an initial metal concentration of 5-10 mg/L, a slag dose of 10 g/L, a contact time of 10 hours, a shaking speed of 170 rpm and 25 °C. The maximum removal of arsenic (As), copper (Cu) and lead (Pb) was achieved at pH 5, while the maximum removal of Cd was found above pH 7. A column experiment was also conducted to evaluate adsorption depth and service time for metal removal. This study also determined the adsorption capacity, adsorption rate and mass transfer rate. The maximum adsorption capacity was found to be 3.84 mg/g for As, 4 mg/g for Pb, and 3.86 mg/g for Cu. The adsorption capacity of the nickel slag for the four test metals was in the decreasing order Pb > Cu > As > Cd. Modelling of the experimental data with Visual MINTEQ revealed saturation indices of < 0 in all cases, suggesting that the metals at this pH were under-saturated and thus in their aqueous forms. This confirms the absence of precipitation in the removal of these metals at these pH values. The experimental results also showed that Fe and Ni leaching from the slag during the adsorption process was very minimal, ranging from 0.01 to 0.022 mg/L, indicating the slag's potential as an adsorbent in the treatment industry. The study also revealed that the waste product (Ni smelter slag) can be reused about five times before disposal in a landfill or use as a stabilization material. It also highlighted recycled slag as a potential reactive adsorbent in the field of remediation engineering and explored the benefits of using renewable waste products for the water treatment industry.
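The batch adsorption quantities reported above follow from standard mass-balance formulas; a minimal sketch with illustrative numbers (not the study's raw data) is:

```python
# Minimal sketch: removal efficiency and batch adsorption capacity,
# q = (C0 - Ce) * V / m, for a slag adsorption experiment.
def removal_efficiency(c0_mg_l: float, ce_mg_l: float) -> float:
    """Percentage of metal removed from solution."""
    return 100.0 * (c0_mg_l - ce_mg_l) / c0_mg_l

def adsorption_capacity(c0_mg_l: float, ce_mg_l: float,
                        volume_l: float, slag_mass_g: float) -> float:
    """Metal taken up per gram of slag (mg/g)."""
    return (c0_mg_l - ce_mg_l) * volume_l / slag_mass_g

# Hypothetical batch: 10 mg/L initial Pb, 0.2 mg/L remaining, 1 L solution, 10 g slag.
print(round(removal_efficiency(10, 0.2), 1))            # 98.0 %
print(round(adsorption_capacity(10, 0.2, 1.0, 10), 2))  # 0.98 mg/g
```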

Keywords: adsorption, industrial waste, recycling, slag, treatment

Procedia PDF Downloads 128
884 The Application of Video Segmentation Methods for the Purpose of Action Detection in Videos

Authors: Nassima Noufail, Sara Bouhali

Abstract:

In this work, we develop a semi-supervised solution for action detection in videos and propose an efficient algorithm for video segmentation. The approach is divided into video segmentation, feature extraction, and classification. In the first part, a video is segmented into clips using the K-means algorithm; our goal is to find groups based on similarity within the video. Applying k-means clustering to all frames is time-consuming; therefore, we started by identifying transition frames, where the scene in the video changes significantly, and then applied K-means clustering to these transition frames. We used two image filters, the Gaussian filter and the Laplacian of Gaussian, each of which extracts a set of features from the frames: the Gaussian filter blurs the image and omits the higher frequencies, and the Laplacian of Gaussian detects regions of rapid intensity change. We then used this vector of filter responses as input to our k-means algorithm. The output is a set of cluster centers. Each video frame pixel is then mapped to the nearest cluster center and painted with a corresponding color to form a visual map, in which similar pixels are grouped. We then computed a clustering score indicating how near clusters are to each other and plotted a signal of frame number vs. clustering score. Our hypothesis was that the evolution of the signal would not change while semantically related events were happening in the scene. We marked breakpoints at which the root mean square level of the signal changes significantly, and each breakpoint indicates the beginning of a new video segment. In the second part, for each segment from the first part, we randomly selected a 16-frame clip and extracted spatiotemporal features for every 16 frames using a pre-trained convolutional 3D network (C3D). The final C3D output is a 512-dimensional feature vector; hence, we used principal component analysis (PCA) for dimensionality reduction. The final part is classification: the C3D feature vectors are used as input to train a multi-class linear support vector machine (SVM), and the resulting classifier is used to detect the action. We evaluated our experiment on the UCF101 dataset, which consists of 101 human action categories, and achieved an accuracy that outperforms the state of the art by 1.2%.
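The pipeline can be sketched end to end with hypothetical data (filter responses clustered with K-means, stand-in clip features in place of real C3D outputs, PCA reduction, and a linear multi-class SVM); this is an illustration of the approach, not the authors' implementation:

```python
# Minimal sketch of the described pipeline on synthetic data.
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# 1) Per-pixel filter responses of one transition frame, clustered with K-means.
frame = rng.random((64, 64))
responses = np.stack([gaussian_filter(frame, sigma=2).ravel(),
                      gaussian_laplace(frame, sigma=2).ravel()], axis=1)
pixel_labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(responses)

# 2) Clip-level features (stand-ins for 512-dim C3D vectors), PCA-reduced.
clip_features = rng.random((40, 512))
clip_labels = rng.integers(0, 3, 40)            # 3 hypothetical action classes
reduced = PCA(n_components=20).fit_transform(clip_features)

# 3) Multi-class linear SVM on the reduced features.
svm = LinearSVC().fit(reduced, clip_labels)
print(pixel_labels[:10], svm.predict(reduced[:5]))
```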

Keywords: video segmentation, action detection, classification, Kmeans, C3D

Procedia PDF Downloads 59
883 The Relationship between Caregiver Burden and Life Satisfaction of Caregivers of Elderly Individuals

Authors: Guler Duru Asiret, Cemile Kutmec Yilmaz, Gulcan Bagcivan, Tugce Turten Kaymaz

Abstract:

This descriptive study was conducted to determine the relationship between caregiver burden and life satisfaction among caregivers who give home care to elderly individuals. The sample was recruited from the internal medicine unit and palliative unit of a state hospital located in Turkey in June 2016-2017. The study sample consisted of 231 primary caregiver family members who met the eligibility criteria and agreed to participate in the study. The inclusion criteria were as follows: being an inpatient's caregiver, having been the primary caregiver for at least 3 months, being at least 18 years of age, and having no communication problem or mental disorder. Data were gathered using an Information Form prepared by the researchers based on previous literature, the Zarit Burden Interview (ZBI), and the Satisfaction with Life Scale (SWLS). The data were analyzed using IBM SPSS Statistics software version 20.0 (SPSS, Chicago, IL). The descriptive characteristics of the participants were analyzed using number, percentage, mean and standard deviation. The normality of the distribution of scale scores was analyzed using the Kolmogorov-Smirnov and Shapiro-Wilk tests. Relationships between scales were analyzed using Spearman's rank-correlation coefficient. P values less than 0.05 were considered significant. The average age of the caregivers was 50.11 ± 13.46 (mean ± SD) years. Of the caregivers, 76.2% were women, 45% were primary school graduates, 89.2% were married, and 38.1% were the daughters of their patients. Among these, 52.4% evaluated their income level as good. Of them, 53.6% had been giving care for less than 2 years. The patients' average age was 77.1 ± 8.0 years. Of the patients, 55.8% were women, 56.3% were illiterate, 70.6% were married, and 97.4% had at least one chronic disease. The mean Zarit Burden Interview score was 35.4 ± 1.5, and the mean Satisfaction with Life Scale score was 20.6 ± 6.8. A negative relationship was found between the caregivers' scores on the ZBI and on the SWLS (r = -0.438, p < 0.001). The present study determined that the caregivers have a moderate caregiver burden and moderate life satisfaction, and that the life satisfaction of caregivers decreased as their caregiver burden increased. In line with the results obtained from the research, it is recommended to increase the effectiveness of discharge training, to arrange training and counseling programs for caregivers to cope with the problems they experience, to monitor the caregivers at regular intervals and to provide the necessary institutional support.
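The correlation analysis can be illustrated with a minimal sketch (synthetic ZBI and SWLS scores, not the study's data), using Spearman's rank correlation:

```python
# Minimal sketch: Spearman rank correlation between caregiver burden (ZBI)
# and life satisfaction (SWLS), on synthetic scores.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
zbi = rng.integers(10, 70, size=50)                    # hypothetical burden scores
swls = 30 - 0.3 * zbi + rng.normal(0, 4, size=50)      # satisfaction falls with burden

rho, p_value = spearmanr(zbi, swls)
print(f"rho = {rho:.3f}, p = {p_value:.4f}")           # expect a negative rho
```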

Keywords: caregiver burden, family caregivers, nurses, satisfaction

Procedia PDF Downloads 154
882 Acid Soil Amelioration Using Coal Bio-Briquette Ash and Waste Concrete in China

Authors: Y. Sakai, C. Wang

Abstract:

The decrease in agricultural production due to soil deterioration has become an urgent problem. Soil acidification is a potentially serious land degradation issue, and it will have a major impact on agricultural productivity and sustainable farming systems. In China, acid soil is mainly distributed in the southern part of the country, where the decrease in agricultural production and heavy metal contamination are serious problems. In addition, both environmental and health problems due to exhaust gases, mainly sulfur dioxide (SO₂), and the generation of huge amounts of construction and demolition waste with accelerating urbanization have emerged as social problems in China. Therefore, the need for the recycling and reuse of both desulfurization waste and waste concrete is very urgent, and we have investigated the effectiveness of both coal bio-briquette ash and waste concrete as acid soil amendments. In this paper, an acid soil (AS1) from Nanjing (pH = 6.0, EC = 1.6 dS m-1) and an acid soil (AS2) from Guangzhou (pH = 4.1, EC = 0.2 dS m-1) were investigated in soil amelioration tests. The soil amendments were three coal bio-briquette ashes (BBA1, BBA2 and BBA3), waste cement fine powder (CFP) (< 200 µm particle diameter), waste concrete particles (WCP) (< 4.75 mm; fractions < 0.6 mm, 0.6-1.0 mm, 1.0-2.0 mm and 2.0-4.75 mm), and six mixtures of the two coal bio-briquette ashes (BBA2 and BBA3) with CFP, WCP (< 0.6 mm) and WCP (2.0-4.75 mm). In the acid soil amelioration tests, the three BBAs, CFP and the various WCPs, dosed on the basis of exchangeable calcium concentration, were added to the two acid soils. The application rates were from 0 wt% to 3.5 wt% in the AS1 test and from 0 wt% to 6.0 wt% in the AS2 test, respectively. Soil chemical properties (pH, EC, exchangeable and soluble ions (Na, Ca, Mg, K)) before and after mixing with the soil amendments were measured. In addition, Al toxicity and the balance of salts (CaO, K₂O, MgO) in the soil after amelioration were evaluated. The order of pH and exchangeable Ca concentration effective for acid soil amelioration was WCP (< 0.6 mm) > CFP > WCP (2.0-4.75 mm) > BBA1 > BBA2 > BBA3. In the AS1 and AS2 amelioration tests using the three BBAs, the pH and EC increased slightly with increasing application rate and reached the appropriate ranges of both pH and EC only with BBA1, because BBA1 had higher pH and exchangeable Ca values. Soil pH and EC also reached the appropriate ranges with increasing application rates of BBA2 and BBA3, and when CFP, WCP (< 0.6 mm) and WCP (2.0-4.75 mm) were used as soil amendments. In addition, the mixture amendments of BBA2 and BBA3 with CFP, WCP (< 0.6 mm) and WCP (2.0-4.75 mm) could ameliorate the soils at smaller application rates than BBA alone. The exchangeable Al concentration decreased drastically with the increase in pH due to soil amelioration and was below the standard value. Lastly, the heavy metal (Cd, As, Se, Ni, Cr, Pb, Mo, B, Cu, Zn) contents in the new soil amendments were below the control standard values for agricultural use in China. Thus, we can propose a new acid soil amelioration method using coal bio-briquette ash and waste concrete in China.

Keywords: acid soil, coal bio-briquette ash, soil amelioration, waste concrete

Procedia PDF Downloads 172
881 GC-MS-Based Untargeted Metabolomics to Study the Metabolism of Pectobacterium Strains

Authors: Magdalena Smoktunowicz, Renata Wawrzyniak, Malgorzata Waleron, Krzysztof Waleron

Abstract:

Pectobacterium spp. were previously classified in the genus Erwinia, founded in 1917 to unite all Gram-negative, fermentative, non-sporulating and peritrichously flagellated plant pathogenic bacteria known at that time. Following the work of Waldee (1945), and in the Approved Lists of Bacterial Names and bacteriology manuals of 1980, they were described under either the genus Erwinia or Pectobacterium. The genus Pectobacterium was formally described in 1998 on the basis of 265 Pectobacterium strains. Currently, there are 21 species of Pectobacterium, including Pectobacterium betavasculorum, recognized since 2003, which causes soft rot of sugar beet tubers. Based on the biochemical experiments carried out to date, these bacteria are known to be Gram-negative, catalase-positive, oxidase-negative and facultatively anaerobic, to utilize gelatin, and to cause symptoms of soft rot on potato and sugar beet tubers. The very fact that they grow on sugar beet may indicate a metabolism characteristic of this species alone. Metabolomics, broadly defined as the biology of metabolic systems, allows comprehensive measurements of metabolites. Metabolomics and genomics are complementary tools for the identification of metabolites and their reactions, and thus for the reconstruction of metabolic networks. The aim of this study was to apply GC-MS-based untargeted metabolomics to study the metabolism of P. betavasculorum under different growing conditions. The metabolomic profiles of the biomass and of the growth media were determined. For sample preparation, the following protocol was used: 900 µl of a methanol:chloroform:water mixture (10:3:1, v:v:v) was added to 900 µl of biomass collected from the bottom of the tube and to 900 µl of nutrient medium separated from the bacterial biomass. After centrifugation (13,000 x g, 15 min, 4 °C), 300 µL of the obtained supernatants were concentrated in a rotary vacuum evaporator to dryness. Afterwards, a two-step derivatization procedure was performed before the GC-MS analyses. The obtained results were subjected to statistical analysis using both uni- and multivariate tests and were evaluated using the KEGG database to assess which metabolic pathways are activated, and which genes are responsible for them, during the metabolism of the substrates present in the growth environment. The observed metabolic changes, combined with biochemical and physiological tests, may enable pathway discovery, regulatory inference and understanding of the homeostatic abilities of P. betavasculorum.
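To make the uni- and multivariate statistics step concrete, the sketch below runs a PCA overview and per-feature Welch t-tests on a simulated GC-MS peak table (samples × metabolite features); the data, group sizes and fold change are assumptions for illustration only, not the study's measurements.

```python
# Illustrative statistics on a simulated untargeted GC-MS peak table comparing
# two growth conditions; log-transform, scale, PCA, then a univariate screen.
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_per_group, n_features = 6, 50
group_a = rng.lognormal(mean=2.0, sigma=0.3, size=(n_per_group, n_features))
group_b = rng.lognormal(mean=2.0, sigma=0.3, size=(n_per_group, n_features))
group_b[:, :5] *= 2.5                      # pretend five metabolites respond to the medium

x = np.vstack([group_a, group_b])
x_scaled = StandardScaler().fit_transform(np.log10(x))

# Multivariate overview: PCA scores separate the two growth conditions
pca = PCA(n_components=2)
scores = pca.fit_transform(x_scaled)
print("explained variance ratio:", pca.explained_variance_ratio_)

# Univariate screen: Welch t-test per metabolite feature
_, p_values = stats.ttest_ind(group_a, group_b, axis=0, equal_var=False)
print("features with p < 0.05:", int(np.sum(p_values < 0.05)))
```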

Keywords: GC-MS chromatography, metabolomics, metabolism, Pectobacterium strains, Pectobacterium betavasculorum

Procedia PDF Downloads 53
880 Characterization, Replication and Testing of Designed Micro-Textures, Inspired by the Brill Fish, Scophthalmus rhombus, for the Development of Bioinspired Antifouling Materials

Authors: Chloe Richards, Adrian Delgado Ollero, Yan Delaure, Fiona Regan

Abstract:

Growing concern about the natural environment has accelerated the search for non-toxic but, at the same time, economically reasonable antifouling materials. Bioinspired surfaces, owing to the antifouling capabilities of their nano- and micro-topographies, offer a promising approach to the design of novel antifouling surfaces. Biological organisms are known to have highly evolved and complex topographies with demonstrated antifouling potential, e.g. shark skin. Previous studies have examined the antifouling ability of topographic patterns, textures and roughness scales found on natural organisms. One of the mechanisms used to explain the adhesion of cells to a substrate is called attachment point theory: the fouling organism experiences increased attachment where there are multiple attachment points and reduced attachment where the number of attachment points is decreased. In this study, an attempt was made to characterize the microtopography of the common brill fish, Scophthalmus rhombus. Scophthalmus rhombus is a small flatfish of the family Scophthalmidae, inhabiting regions from Norway to the Mediterranean and the Black Sea. It resides in shallow sandy and muddy coastal areas at depths of around 70-80 meters. Six engineered surfaces (inspired by the brill fish scale) produced by a two-photon polymerization (2PP) process were evaluated for their potential as an antifouling (AF) solution for incorporation onto tidal energy blades. The micro-textures were analyzed for their AF potential under both static and dynamic laboratory conditions using two laboratory-grown diatom species, Amphora coffeaeformis and Nitzschia ovalis. The incorporation of a surface topography was observed to disrupt the growth of A. coffeaeformis and N. ovalis cells on the surface in comparison to control surfaces. This work has demonstrated the importance of understanding cell-surface interaction, in particular topography, for the design of novel antifouling technology. The study concluded that biofouling can be controlled by physical modification and has contributed significant knowledge to a successful novel bioinspired AF technology based on brill, demonstrated here for the first time.

Keywords: attachment point theory, biofouling, Scophthalmus rhombus, topography

Procedia PDF Downloads 85
879 Air Handling Units Power Consumption Using Generalized Additive Model for Anomaly Detection: A Case Study in a Singapore Campus

Authors: Ju Peng Poh, Jun Yu Charles Lee, Jonathan Chew Hoe Khoo

Abstract:

The emergence of digital twin technology, a digital replica of the physical world, has improved real-time access to sensor data about the performance of buildings. This digital transformation has opened up many opportunities to improve building management by using the collected data to monitor consumption patterns and energy leakages. One example is the integration of predictive models for anomaly detection. In this paper, we use a GAM (Generalised Additive Model) for anomaly detection in the power consumption pattern of Air Handling Units (AHU). There is ample research on the use of GAMs for the prediction of power consumption at the office-building and nation-wide level; however, there is limited illustration of their anomaly detection capabilities, of prescriptive analytics case studies, and of their integration with the latest developments in digital twin technology. In this paper, we applied the general GAM modelling framework to the historical AHU power consumption and cooling load data of a building on an education campus in Singapore, collected between Jan 2018 and Aug 2019, to train prediction models that, in turn, yield predicted values and ranges. The historical data are seamlessly extracted from the digital twin for modelling purposes. We enhanced the utility of the GAM by using it to power a real-time anomaly detection system based on the forward predicted ranges. The magnitude of deviation from the upper and lower bounds of the uncertainty intervals is used to identify anomalous data points, all based on historical data and without explicit intervention from domain experts. Notwithstanding, the domain expert fits in through an optional feedback loop through which iterative data cleansing is performed. After an anomalously high or low level of power consumption is detected, a set of rule-based conditions is evaluated in real time to help determine the next course of action for the facilities manager. The performance of the GAM is then compared with other approaches to evaluate its effectiveness. Lastly, we discuss the successful deployment of this approach for the detection of anomalous power consumption patterns, illustrated with real-world use cases.
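As a hedged illustration of the forward-predicted-range idea, the sketch below fits a GAM with the pygam library to simulated AHU data and flags a reading that falls outside the 95% prediction interval; the feature set (hour of day, cooling load), the library choice and the interval width are assumptions made for the example, not the authors' implementation.

```python
# Sketch of GAM-based anomaly flagging on simulated AHU power data (assumes pygam).
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(2)
hours = rng.integers(0, 24, size=2000)
cooling_load = 200 + 50 * np.sin(hours / 24 * 2 * np.pi) + rng.normal(0, 10, 2000)
power = 0.4 * cooling_load + 20 * (hours > 7) * (hours < 19) + rng.normal(0, 8, 2000)

X = np.column_stack([hours, cooling_load])
gam = LinearGAM(s(0) + s(1)).fit(X, power)     # smooth terms on both features

# Forward-predicted range: flag new readings outside the 95% prediction interval
X_new = np.array([[14, 260.0]])
lower, upper = gam.prediction_intervals(X_new, width=0.95).ravel()
observed = 190.0                               # hypothetical live meter reading (kW)
is_anomaly = not (lower <= observed <= upper)
print(f"predicted range [{lower:.1f}, {upper:.1f}] kW, anomaly: {is_anomaly}")
```

In a deployment like the one described, the magnitude of the deviation from these bounds, rather than a single yes/no flag, would drive the rule-based follow-up actions.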

Keywords: anomaly detection, digital twin, generalised additive model, GAM, power consumption, supervised learning

Procedia PDF Downloads 134
878 Removal of Methylene Blue from Aqueous Solution by Adsorption onto Untreated Coffee Grounds

Authors: N. Azouaou, H. Mokaddem, D. Senadjki, K. Kedjit, Z. Sadaoui

Abstract:

Introduction: Water contamination caused by dye industries, including food, leather, textile, plastics, cosmetics, paper-making, printing and dye synthesis, has attracted more and more attention, since most dyes are harmful to human beings and the environment. Untreated coffee grounds were used as a high-efficiency adsorbent for the removal of a cationic dye (methylene blue, MB) from aqueous solution. Characterization of the adsorbent was performed using several techniques, such as SEM, surface area (BET), FTIR and pH of zero charge. The effects of contact time, adsorbent dose, initial solution pH and initial concentration were systematically investigated. Results showed that the adsorption kinetics followed the pseudo-second-order kinetic model. The Langmuir isotherm model was in better agreement with the experimental data than the Freundlich and D–R models. The maximum adsorption capacity was found to be 52.63 mg/g. In addition, a possible adsorption mechanism was proposed based on the experimental results. Experimental: The adsorption experiments were carried out in batch mode at room temperature. A given mass of adsorbent was added to methylene blue (MB) solution and the mixture was agitated for a set time. Samples were withdrawn at regular time intervals, and the concentrations of MB left in the supernatant solutions were determined using a UV–vis spectrophotometer. The amount of MB adsorbed per unit mass of coffee grounds (qt) and the dye removal efficiency (R%) were evaluated. Results and Discussion: Some chemical and physical characteristics of the coffee grounds are presented, and the morphological analysis of the adsorbent is also reported. Conclusions: The good capacity of untreated coffee grounds to remove MB from aqueous solution was demonstrated in this study, highlighting its potential for effluent treatment processes. The kinetic experiments show that adsorption is rapid, with the maximum adsorption capacity (qmax = 52.63 mg/g) achieved within 30 min. The adsorption process is a function of the adsorbent dose, pH and initial dye concentration. The optimal parameters found were an adsorbent dose of m = 5 g, pH = 5 and ambient temperature. FTIR spectra showed that the principal functional sites taking part in the sorption process included carboxyl and hydroxyl groups.
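To show how the two models named above are typically fitted, the sketch below performs non-linear least-squares fits of the Langmuir isotherm and the pseudo-second-order kinetic model in Python; the (Ce, qe) and (t, qt) data points are hypothetical and stand in for the study's batch measurements.

```python
# Hedged sketch: fitting the Langmuir isotherm and the pseudo-second-order
# kinetic model to hypothetical methylene blue adsorption data.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, q_max, k_l):
    # qe = q_max * K_L * Ce / (1 + K_L * Ce)
    return q_max * k_l * ce / (1 + k_l * ce)

def pseudo_second_order(t, q_e, k2):
    # qt = k2 * qe^2 * t / (1 + k2 * qe * t)
    return (k2 * q_e**2 * t) / (1 + k2 * q_e * t)

ce = np.array([5, 10, 20, 40, 80.0])        # equilibrium MB conc. (mg/L), hypothetical
qe = np.array([18, 28, 38, 46, 50.0])       # adsorbed amount (mg/g), hypothetical
(q_max, k_l), _ = curve_fit(langmuir, ce, qe, p0=(50, 0.1))
print(f"Langmuir q_max = {q_max:.1f} mg/g, K_L = {k_l:.3f} L/mg")

t = np.array([2, 5, 10, 20, 30, 60.0])      # contact time (min), hypothetical
qt = np.array([20, 35, 44, 49, 51, 52.0])   # adsorbed amount over time (mg/g)
(q_e_fit, k2), _ = curve_fit(pseudo_second_order, t, qt, p0=(52, 0.01))
print(f"pseudo-second-order q_e = {q_e_fit:.1f} mg/g, k2 = {k2:.4f} g/(mg·min)")
```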

Keywords: adsorption, methylene blue, coffee grounds, kinetic study

Procedia PDF Downloads 210
877 The Paradox of Design Aesthetics and the Sustainable Design

Authors: Asena Demirci, Gozen Guner Aktaş, Nur Ayalp

Abstract:

Nature provides a living space for humans, yet it is also destroyed by humans for their own needs and ambitions. To reduce this damage to nature, solutions have begun to be generated and developed, and precautions are being implemented. After the 1960s, especially once the ozone layer had been damaged and thinned by toxic substances released by man-made structures, environmental problems began to affect people's daily lives, and the subject of environmental solutions and precautions became a priority issue for scientists. Most environmental problems are caused by buildings and factories that are built without any concern for protecting nature. This situation has created awareness of environmental issues, and terms such as sustainability and renewable energy have appeared in the building, construction and architecture sectors to support environmental protection. From this perspective, the design disciplines should also be respectful of nature and sustainability. Designs that incorporate features such as sustainability, renewability and ecological soundness are less detrimental to the environment than designs that do not. Furthermore, such designs produce the energy they consume, so they do not deplete natural resources; they do not contain harmful substances and are made of recyclable materials, thus becoming environmentally friendly structures. There is, however, a common concern among designers about sustainable design: they believe that the idea of sustainability inhibits creativity and that all works of design come to resemble each other in aesthetic and technological terms. In addition, there is a concern in design ethics that aesthetic quality cannot be accepted as a priority. For these reasons, there are few designs around the world that are both eco-friendly and well designed with clear aesthetic intent. As in other design disciplines, the concept of sustainability is becoming more important each day in interior architecture and interior design. Since human beings spend about 90% of their lives in interior spaces, the importance of this concept for interiors is obvious. Aesthetics is another vital concern in interior space design, and sustainable materials and sustainable interior design applications often conflict with personal aesthetic parameters. This study aims to discuss the great paradox between design aesthetics and sustainable design. Does the sustainable approach in interior design disturb design aesthetics? This is one of the most frequently discussed questions in the field. In this paper, the question is evaluated through a case study that analyzes the aesthetic perceptions and preferences of users and designers in sustainable interior spaces.

Keywords: aesthetics, interior design, sustainable design, sustainability

Procedia PDF Downloads 270
876 The Use of Random Set Method in Reliability Analysis of Deep Excavations

Authors: Arefeh Arabaninezhad, Ali Fakher

Abstract:

Since deterministic analysis methods fail to take system uncertainties into account, probabilistic and non-probabilistic methods have been suggested. Geotechnical analyses are used to determine the stress and deformation caused by construction; accordingly, they require many input variables that depend on ground behavior. The Random Set approach is an applicable reliability analysis method when comprehensive sources of information are not available. Using the Random Set method, bounds on the extremes of the system responses are obtained with a relatively small number of simulations compared to fully probabilistic methods; the random set approach has therefore been proposed for reliability analysis in geotechnical problems. In the present study, the application of the random set method to the reliability analysis of deep excavations is investigated through three deep excavation projects that were monitored during the excavation process. A finite element code is utilized for numerical modeling. Two expected ranges, from different sources of information, are established for each input variable, and a specific probability assignment is defined for each range. To determine the most influential input variables and subsequently reduce the number of required finite element calculations, a sensitivity analysis is carried out. Input data for the finite element model are obtained by combining the upper and lower bounds of the input variables. The probability share of each finite element calculation is determined from the probabilities assigned to the input variable ranges present in each combination. The horizontal displacement of the top point of the excavation is considered the main response of the system. The result of the reliability analysis for each deep excavation is presented by constructing the Belief and Plausibility distribution functions (i.e. lower and upper bounds) of the system response obtained from the deterministic finite element calculations. To evaluate the quality of the input variables as well as the applied reliability analysis method, the range of displacements extracted from the models was compared to the in-situ measurements, and good agreement was observed. The comparison also showed that the Random Set Finite Element Method is applicable for estimating the horizontal displacement of the top point of a deep excavation. Finally, the probability of failure or unsatisfactory performance of the system is evaluated by comparing the threshold displacement with the reliability analysis results.
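The sketch below illustrates the bookkeeping behind a random set calculation in a deliberately simplified form: each input variable has two ranges with basic probability assignments, every combination of bounds is propagated through a toy response function standing in for the finite element model, and belief/plausibility bounds are accumulated for a displacement threshold. The input variables, ranges, response function and threshold are all hypothetical.

```python
# Simplified random set propagation with Belief/Plausibility bounds (illustrative only).
from itertools import product

# (lower, upper, basic probability assignment) for two hypothetical input variables
friction_angle = [(28.0, 32.0, 0.6), (30.0, 35.0, 0.4)]
cohesion =       [(10.0, 15.0, 0.5), (12.0, 20.0, 0.5)]

def displacement(phi, c):
    # stand-in for the FE model: horizontal displacement decreases with soil strength
    return 500.0 / (phi + 0.8 * c)

focal_elements = []
for (p_lo, p_hi, p_m), (c_lo, c_hi, c_m) in product(friction_angle, cohesion):
    # response bounds over each combination of input bounds (assumes a monotonic model)
    responses = [displacement(p, c) for p in (p_lo, p_hi) for c in (c_lo, c_hi)]
    focal_elements.append((min(responses), max(responses), p_m * c_m))

threshold = 13.0  # hypothetical allowable horizontal displacement (mm)
belief = sum(m for lo, hi, m in focal_elements if hi <= threshold)        # surely satisfies
plausibility = sum(m for lo, hi, m in focal_elements if lo <= threshold)  # possibly satisfies
print(f"Bel(disp <= {threshold}) = {belief:.2f}, Pl(disp <= {threshold}) = {plausibility:.2f}")
```

The complement of the belief value gives an upper bound on the probability of unsatisfactory performance, which mirrors how the threshold displacement is compared with the reliability analysis results in the study.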

Keywords: deep excavation, random set finite element method, reliability analysis, uncertainty

Procedia PDF Downloads 254
875 Perspectives and Challenges of a Functional Bread With Yeast Extract to Improve Human Diet

Authors: Cláudia Patrocínio, Beatriz Fernandes, Ana Filipa Pires

Abstract:

Background: Mirror therapy (MT) is used to improve motor function after stroke. During MT, a mirror is placed between the two upper limbs (UL), reflecting movements of the non-affected side as if they were those of the affected side. Objectives: The aim of this review is to analyze the evidence on the effectiveness of MT in the recovery of UL function in people with chronic stroke. Methods: The literature search was carried out in the PubMed, ISI Web of Science, and PEDro databases. Inclusion criteria: a) studies that include individuals diagnosed with stroke for at least 6 months; b) intervention with MT on the UL, alone or compared with other interventions; c) articles published until 2023; d) articles published in English or Portuguese; e) randomized controlled studies. Exclusion criteria: a) animal studies; b) studies that do not provide a detailed description of the intervention; c) studies using central electrical stimulation. The methodological quality of the included studies was assessed using the Physiotherapy Evidence Database (PEDro) scale, and studies scoring < 4 on the PEDro scale were excluded. Eighteen studies met all the inclusion criteria. Main results and conclusions: The quality of the studies varied between 5 and 8. One article compared muscular strength training (MST) with and without MT, four articles compared MT with conventional therapy (CT), one study compared extracorporeal shock therapy (EST) with and without MT, another study compared functional electrical stimulation (FES), MT and biofeedback, three studies compared MT with Mesh Glove (MG) or sham therapy, five articles compared bimanual exercises with and without MT, and three studies compared MT with virtual reality (VR) or robot training (RT). Changes in body function and structure (an International Classification of Functioning, Disability and Health parameter) were assessed in each article mainly with the Fugl-Meyer Assessment-Upper Limb scale, while activity and participation (also International Classification of Functioning, Disability and Health parameters) were evaluated with different scales in each study. Overall, positive results were seen in these parameters. The results suggest that MT combined with other therapies is more effective for motor recovery and function of the affected UL than those techniques alone, although the effects were modest in most of the included studies. There was also a more significant improvement in the distal movements of the affected hand than in the rest of the UL.

Keywords: physical therapy, mirror therapy, chronic stroke, upper limb, hemiplegia

Procedia PDF Downloads 35
874 Characterization of WNK2 Role on Glioma Cells Vesicular Traffic

Authors: Viviane A. O. Silva, Angela M. Costa, Glaucia N. M. Hajj, Ana Preto, Aline Tansini, Martin Roffé, Peter Jordan, Rui M. Reis

Abstract:

Autophagy is a recycling and degradative system suggested to be a major cell death pathway in cancer cells. The autophagy pathway is interconnected with the endocytic pathways, sharing the same ultimate lysosomal destination. Lysosomes are crucial regulators of cell homeostasis, responsible for downregulating receptor signalling and turnover. It therefore seems highly likely that derailed endocytosis can make major contributions to several hallmarks of cancer. WNK2, a member of the WNK (with-no-lysine [K]) subfamily of protein kinases, has been found downregulated through promoter hypermethylation and has been proposed to act as a specific tumour-suppressor gene in brain tumors. Although some contradictory studies have indicated WNK2 as an autophagy modulator, its role in cancer cell death is largely unknown. There is also growing evidence for additional roles of WNK kinases in vesicular traffic. Aim: To evaluate the role of WNK2 in autophagy and endocytosis in the glioma context. Methods: Wild-type (wt) A172 cells (WNK2 promoter-methylated), and A172 cells transfected either with an empty vector (Ev) or with a WNK2 expression vector, were used to assess the basal cellular capacity to promote autophagy through western blot and flow-cytometry analysis. Additionally, we evaluated the effect of WNK2 on general endocytic trafficking routes by immunofluorescence. Results: The re-expression of ectopic WNK2 did not interfere with the expression levels of the autophagy-related protein light chain 3 (LC3-II), nor did it alter mTOR signaling, when compared with Ev or wt A172 cells. However, the restoration of WNK2 resulted in a marked increase (from 8% to 92.4%) in the formation of acidic vesicular organelles (AVOs). Moreover, our results also suggest that WNK2-expressing cells show a delayed uptake and internalization rate of cholera toxin B and transferrin ligands. Conclusions: The restoration of WNK2 interferes with vesicular traffic during the endocytic pathway and increases AVO formation. These results also suggest a role for WNK2 in growth factor receptor turnover related to cell growth and homeostasis, and once more link WNK2 silencing to the genesis of gliomas.

Keywords: autophagy, endocytosis, glioma, WNK2

Procedia PDF Downloads 355
873 Modelling Soil Inherent Wind Erodibility Using Artifical Intellligent and Hybrid Techniques

Authors: Abbas Ahmadi, Bijan Raie, Mohammad Reza Neyshabouri, Mohammad Ali Ghorbani, Farrokh Asadzadeh

Abstract:

In recent years, vast areas of Urmia Lake in Dasht-e-Tabriz have dried up, exposing saline sediments at the surface and leaving the lake's coastal areas highly susceptible to wind erosion. This study was conducted to investigate wind erosion and its relationship to soil physicochemical properties, and to model wind erodibility (WE) using artificial intelligence techniques. For this purpose, 96 soil samples were collected from the 0-5 cm depth across 414,000 hectares using a stratified random sampling method. To measure WE, all samples (<8 mm) were exposed to 5 different wind velocities (9.5, 11, 12.5, 14.1 and 15 m s-1 at a height of 20 cm) in a wind tunnel, and the relationship of WE with soil physicochemical properties was evaluated. According to the results, WE varied within the range of 9.98-76.69 (g m-2 min-1)/(m s-1) with a mean of 10.21 and a coefficient of variation of 94.5%, showing a relatively high variation in the studied area. WE was significantly (P<0.01) affected by soil physical properties, including mean weight diameter, erodible fraction (secondary particles smaller than 0.85 mm) and the percentages of the secondary particle size classes 2-4.75, 1.7-2 and 0.1-0.25 mm. The mean weight diameter, erodible fraction and the percentage of the 0.1-0.25 mm size class showed the strongest relationships with WE (coefficients of determination of 0.69, 0.67 and 0.68, respectively). This study also compared the efficiency of multiple linear regression (MLR), gene expression programming (GEP), an artificial neural network (MLP), an artificial neural network optimized by a genetic algorithm (MLP-GA) and an artificial neural network optimized by the whale optimization algorithm (MLP-WOA) in predicting soil wind erodibility in Dasht-e-Tabriz. Among the 32 measured soil variables, the percentages of fine sand, the 1.7-2.0 and 0.1-0.25 mm size classes (secondary particles) and organic carbon were selected as model inputs by step-wise regression. The findings showed MLP-WOA to be the most powerful artificial intelligence technique (R2=0.87, NSE=0.87, ME=0.11 and RMSE=2.9) for predicting soil wind erodibility in the study area, followed by MLP-GA, MLP, GEP and MLR; the differences between these methods were significant according to the MGN test. Based on these findings, MLP-WOA may be used as a promising method to predict soil wind erodibility in the study area.
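As a minimal sketch of the modelling step (without the GA/WOA hyperparameter optimization), the example below trains a plain MLP regressor on the four selected soil variables and reports R2, RMSE and NSE; the input data are simulated stand-ins for the study's 96 measured samples, and the network size is an assumption.

```python
# Sketch of MLP-based prediction of wind erodibility from four soil variables.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(3)
n = 96
X = rng.uniform(0, 1, size=(n, 4))   # fine sand, 1.7-2.0 mm, 0.1-0.25 mm, organic C (scaled)
y = 10 + 30 * X[:, 2] - 15 * X[:, 3] + rng.normal(0, 2, n)   # hypothetical erodibility values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X_tr, y_tr)

y_pred = mlp.predict(X_te)
rmse = mean_squared_error(y_te, y_pred) ** 0.5
nse = 1 - np.sum((y_te - y_pred) ** 2) / np.sum((y_te - y_te.mean()) ** 2)  # Nash-Sutcliffe efficiency
print(f"R2 = {r2_score(y_te, y_pred):.2f}, RMSE = {rmse:.2f}, NSE = {nse:.2f}")
```

In the study, the GA and WOA variants differ from this sketch only in how the network weights or hyperparameters are searched; the evaluation metrics are computed the same way.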

Keywords: wind erosion, erodible fraction, gene expression programming, artificial neural network

Procedia PDF Downloads 49