Search results for: alignment
108 Enhancing Cultural Heritage Data Retrieval by Mapping COURAGE to CIDOC Conceptual Reference Model
Authors: Ghazal Faraj, Andras Micsik
Abstract:
The CIDOC Conceptual Reference Model (CRM) is an extensible ontology that provides integrated access to heterogeneous digital datasets. The CIDOC-CRM offers a “semantic glue” intended to promote accessibility to several diverse and dispersed sources of cultural heritage data. This is achieved by providing a formal structure for the implicit and explicit concepts and their relationships in the cultural heritage field. The COURAGE (“Cultural Opposition – Understanding the CultuRal HeritAGE of Dissent in the Former Socialist Countries”) project aimed to explore the methods of socialist-era cultural resistance during 1950-1990 and was planned to serve as a basis for further narratives and digital humanities (DH) research. The project highlights the diversity of the alternative cultural scenes that flourished in Eastern Europe before 1989. The COURAGE dataset is an online RDF-based registry of historical people, organizations, collections, and featured items. To increase the interlinking between different datasets and retrieve more relevant data from various data silos, a shared federated ontology for the reconciled data is needed. As a first step towards these goals, a full understanding of the CIDOC CRM ontology (the target ontology) and of the COURAGE dataset was required. Subsequently, the queries toward the ontology were determined, and a table of equivalent properties from COURAGE and CIDOC CRM was created. Structural diagrams that clarify the mapping process and the construction of queries are in progress for mapping person, organization, and collection entities to the ontology. Through mapping the COURAGE dataset to the CIDOC-CRM ontology, the dataset will share a common ontological foundation with several other datasets.
Therefore, the expected results are: 1) retrieving more detailed data about existing entities, 2) retrieving data about new entities, 3) aligning the COURAGE dataset to a standard vocabulary, and 4) running distributed SPARQL queries over several CIDOC-CRM datasets and testing the potential of distributed query answering using SPARQL. The next step is to map CIDOC-CRM to other upper-level ontologies or large datasets (e.g., DBpedia, Wikidata) and to address similar questions on a wide variety of knowledge bases.
Keywords: CIDOC CRM, cultural heritage data, COURAGE dataset, ontology alignment
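The property-equivalence table described in the abstract can be sketched in code. The following minimal Python illustration rewrites COURAGE triples onto CIDOC-CRM terms; the COURAGE property names and the chosen CIDOC-CRM equivalents here are invented placeholders for illustration, not the project's actual mapping table.

```python
# Hypothetical COURAGE -> CIDOC-CRM equivalence table (illustrative only).
COURAGE_TO_CRM = {
    "courage:Person":       "crm:E21_Person",
    "courage:Organization": "crm:E74_Group",
    "courage:Collection":   "crm:E78_Curated_Holding",
    "courage:name":         "crm:P1_is_identified_by",
}

def map_triple(subject, predicate, obj, table=COURAGE_TO_CRM):
    """Rewrite one RDF triple, replacing any COURAGE term that has a
    CIDOC-CRM equivalent; unmapped terms pass through unchanged."""
    return tuple(table.get(term, term) for term in (subject, predicate, obj))

triple = ("courage:n12345", "courage:name", '"Vaclav Havel"')
print(map_triple(*triple))
# -> ('courage:n12345', 'crm:P1_is_identified_by', '"Vaclav Havel"')
```

In a real pipeline this translation would be expressed as SPARQL CONSTRUCT queries over the RDF registry rather than a Python dictionary; the dictionary simply makes the equivalence-table idea concrete.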
Procedia PDF Downloads 148
107 Using the Smith-Waterman Algorithm to Extract Features in the Classification of Obesity Status
Authors: Rosa Figueroa, Christopher Flores
Abstract:
Text categorization is the problem of assigning a new document to a set of predetermined categories on the basis of a training set of free-text data that contains documents whose category membership is known. To train a classification model, it is necessary to extract characteristics in the form of tokens that facilitate the learning and classification process. In text categorization, the feature extraction process involves the use of word sequences, also known as N-grams. In general, it is expected that documents belonging to the same category share similar features. The Smith-Waterman (SW) algorithm is a dynamic programming algorithm that performs a local sequence alignment in order to determine similar regions between two strings or protein sequences. This work explores the use of the SW algorithm as an alternative to N-gram feature extraction in text categorization. The dataset used for this purpose contains 2,610 annotated documents with the classes Obese/Non-Obese. This dataset was represented in matrix form using the Bag of Words approach. The score selected to represent the occurrence of the tokens in each document was the term frequency-inverse document frequency (TF-IDF). In order to extract features for classification, four experiments were conducted: the first experiment used SW to extract features, the second used unigrams (single words), the third used bigrams (two-word sequences), and the last used a combination of unigrams and bigrams. To test the effectiveness of the extracted feature set for the four experiments, a Support Vector Machine (SVM) classifier was tuned using 20% of the dataset. The remaining 80% of the dataset, together with 5-fold cross-validation, was used to evaluate and compare the performance of the four feature extraction experiments. Results from the tuning process suggest that SW performs better than the N-gram based feature extraction.
These results were confirmed using the remaining 80% of the dataset, where SW performed best (accuracy = 97.10%, weighted average F-measure = 97.07%). The second best result was obtained by the combination of unigrams and bigrams (accuracy = 96.04%, weighted average F-measure = 95.97%), closely followed by bigrams (accuracy = 94.56%, weighted average F-measure = 94.46%) and finally unigrams (accuracy = 92.96%, weighted average F-measure = 92.90%).
Keywords: comorbidities, machine learning, obesity, Smith-Waterman algorithm
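The local alignment at the core of the approach can be sketched as the standard Smith-Waterman scoring recurrence. The following is a minimal Python illustration of the algorithm itself (best local score only, no traceback); the scoring parameters are illustrative assumptions, not the ones used in the study.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Best local alignment score via the Smith-Waterman recurrence:
    H[i][j] = max(0, diag + s(a[i], b[j]), up + gap, left + gap)."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,  # align a[i-1] with b[j-1]
                          H[i - 1][j] + gap,    # gap in b
                          H[i][j - 1] + gap)    # gap in a
            best = max(best, H[i][j])
    return best

print(smith_waterman("obesity", "obese"))  # -> 8 (local match "obes")
```

For feature extraction, such scores between document token sequences can serve as similarity features in place of shared N-gram counts.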
Procedia PDF Downloads 298
106 Cysteine Proteases of Plants That Act on the Coagulation Cascade: Approach from Bioinformatics
Authors: Tapiwa Brine Mutsauri
Abstract:
The MEROPS system is an information resource for proteases that classifies them into clans according to their catalytic type. Within the plant kingdom, cysteine proteases are among the best known, as they are the catalytic type on which the first studies of plant proteases focused. Plant cysteine proteases have a mechanism of action similar to that of serine proteases, and some are known to act on factors of the blood coagulation cascade, exerting a potent antithrombotic effect and also increasing fibrinolysis. The three-dimensional structure is known for only a few plant cysteine proteases, so knowing their structure would be a method of interest for predicting their potential activity on the factors of the coagulation cascade. Phylogenetics is the study of the evolutionary relationships between biological entities, often species, individuals, or genes (which can be called taxa). It is essential for identifying the evolutionary position and the possible distribution of these enzymes in the plant kingdom, particularly those that act on coagulation factors. Bioinformatic tools such as Clustal Omega/Jalview and MEGA6 can be used to create phylogenetic trees. The results of the alignment show that although there is a certain degree of conservation and consensus among the eleven sequences, in the functionally important motifs (those corresponding to the active site) the degree of conservation and consensus is very low. We can then infer that although activity on coagulation is reported for these enzymes, linked to their structural and mechanistic similarity to serine proteases, this activity may not have a direct or primary relationship with the proteolytic activity associated with their common, poorly conserved active site. This could ultimately be related to modifications in the reaction mechanism of several of the enzymes studied, which would require more detailed study.
We also address below the factors that may have a greater influence on this result. The results of this work enrich the understanding of how species (and molecular sequences in general) evolve. Through phylogenetics, we learn not only how sequences came to be the way they are today but also the general principles that allow us to predict how they will change in the future. For the pharmaceutical sciences, phylogenetic selection of biologically related species can help identify closely related members of a species with compounds of pharmacological importance, such as plant cysteine proteases, in addition to identifying structural features that may influence their pharmacological activity and can be valuable for drug design.
Keywords: computational simulation, proteases, coagulation, bioinformatics
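The per-column conservation measure discussed above can be approximated very simply: for each alignment column, take the fraction of sequences that share the most common residue. This is a generic Python sketch, not the Clustal Omega/Jalview computation, and the aligned fragments below are toy data rather than the eleven study sequences.

```python
from collections import Counter

def column_conservation(aligned_seqs):
    """Fraction of sequences sharing the most common symbol at each
    column of a multiple alignment (gaps '-' counted as symbols here)."""
    n = len(aligned_seqs)
    return [Counter(col).most_common(1)[0][1] / n
            for col in zip(*aligned_seqs)]

aln = ["CYSA-K", "CYSG-K", "CASA-R"]  # toy aligned fragments
print(column_conservation(aln))
```

A low score at columns corresponding to active-site motifs, despite higher scores elsewhere, is the pattern the abstract describes.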
Procedia PDF Downloads 23
105 Improved Intracellular Protein Degradation System for Rapid Screening and Quantitative Study of Essential Fungal Proteins in Biopharmaceutical Development
Authors: Patarasuda Chaisupa, R. Clay Wright
Abstract:
The selection of appropriate biomolecular targets is a crucial aspect of biopharmaceutical development. Auxin-Inducible Degron (AID) technology has demonstrated remarkable potential for efficiently and rapidly degrading target proteins, thereby enabling the identification and validation of drug targets. The AID system also offers a viable method to deplete specific proteins, particularly in cases where the degradation pathway has not been exploited or when proteins or the cellular environment adapt to compensate for a mutation or gene knockout. In this study, we have engineered an improved AID system tailored to deplete proteins of interest. This AID construct combines the auxin-responsive E3 ubiquitin ligase binding domain AFB2 with the substrate degron IAA17 fused to the target genes. Essential fungal genes with the lowest percent amino acid similarity to human and plant orthologs, according to the Basic Local Alignment Search Tool (BLAST), were cloned into the AID construct in S. cerevisiae (AID-tagged strains) using a modular yeast cloning toolkit for multipart assembly and direct genetic modification. Each E3 ubiquitin ligase and IAA17 degron was fused to a fluorescent protein, allowing real-time monitoring of protein levels in response to different auxin doses via cytometry. Our AID system exhibited high sensitivity, with an EC50 value of 0.040 µM (SE = 0.016) for AFB2, enabling the specific promotion of IAA17::target protein degradation. Furthermore, we demonstrate how this improved AID system enhances quantitative functional studies of various proteins in fungi.
The advancements in auxin-inducible protein degradation made in this study offer a powerful approach to investigating the viability of critical target proteins in fungi, screening protein targets for drugs, and regulating intracellular protein abundance, thus revolutionizing the study of protein function underlying a diverse range of biological processes.
Keywords: synthetic biology, bioengineering, molecular biology, biotechnology
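The reported dose sensitivity can be illustrated with a simple Hill dose-response model. The EC50 of 0.040 µM is the value reported above for AFB2; the Hill coefficient of 1 and the model form itself are assumptions for illustration only, not the study's fitted model.

```python
def hill_response(dose_um, ec50=0.040, hill=1.0):
    """Fractional degradation response at a given auxin dose (µM),
    under an assumed Hill dose-response model. ec50=0.040 µM is the
    value reported for AFB2; hill=1 is an illustrative assumption."""
    if dose_um <= 0:
        return 0.0
    return dose_um**hill / (ec50**hill + dose_um**hill)

# At the EC50 the response is half-maximal by definition:
print(hill_response(0.040))  # -> 0.5
```

Fitting such a curve to the cytometry-measured fluorescence at each auxin dose is how an EC50 and its standard error would typically be estimated.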
Procedia PDF Downloads 92
104 Tonal Pitch Structure as a Tool of Social Consolidation
Authors: Piotr Podlipniak
Abstract:
Social consolidation has often been indicated as an adaptive function of music that led to the evolution of the music faculty. According to many scholars, this function is possible thanks to musical rhythm, which enables sensorimotor synchronization to a musical beat. The ability to synchronize to music allows music to be performed collectively, which enhances social cohesion. However, the collective performance of music also involves spectral synchronization, which depends on musical pitch structure. Like rhythmic synchronization, spectral synchronization is a result of ‘brain state alignment’ between people who collectively listen to or perform music. In order to successfully synchronize pitches, performers have to adequately anticipate the pitch structure. The most common form of music, predominant among all human societies, is tonal music. In fact, tonality, understood in the broadest sense as an organization of musical pitches in which some pitches are more important than others, is the only kind of musical pitch structure that has been observed in all currently known musical cultures. The perception of such a musical pitch structure elicits specific emotional reactions, often described as tensions and relaxations. These facts provoke some important questions. What is the evolutionary reason that people use pitch structure as a form of vocal communication? Why do different pitch structures elicit different emotional states independent of extra-musical context? It is proposed in the current presentation that in the course of evolution pitch structure became a human-specific tool of communication whose function is to induce emotional states such as uncertainty and cohesion. By eliciting these emotions during collective music performance, people are able to unconsciously give cues concerning social acceptance. This is probably one of the reasons why in all cultures people collectively perform tonal music.
It is also suggested that tonal pitch structure had been invented socially before it became an evolutionary innovation of Homo sapiens. This means that a predisposition to organize pitches tonally evolved by means of the ‘Baldwin effect’, a process in which natural selection transforms the learned response of an organism into an instinctive response. A hypothetical evolutionary scenario of the emergence of tonal pitch structure will be proposed. In this scenario, social forces such as the need for closer cooperation play the crucial role.
Keywords: emotion, evolution, tonality, social consolidation
Procedia PDF Downloads 325
103 Advancing Healthcare Excellence in China: Crafting a Strategic Operational Evaluation Index System for Chinese Hospital Departments amid Payment Reform Initiatives
Authors: Jing Jiang, Yuguang Gao, Yang Yu
Abstract:
Facing increasingly challenging insurance payment pressures, the Chinese healthcare system is undergoing significant transformations, akin to the implementation of DRG payment models by the United States' Medicare. Consequently, there is a pressing need for Chinese hospitals to optimize departmental operations in line with the ongoing healthcare payment reforms. This abstract delineates the construction of a scientifically rigorous and comprehensive index system at the departmental level in China, strategically aligned with the evolving landscape of healthcare payment reforms. Methodologically, it integrates key process areas and maturity assessment theories, synthesizing relevant literature and industry standards to construct a robust framework and indicator pool. Using the Delphi method, consultations with 21 experts were conducted, which revealed a collectively high level of enthusiasm, authority, and coordination in designing the index system. The resulting model comprises four primary indicators (technical capabilities, cost-effectiveness, operational efficiency, and disciplinary potential), supported by 14 secondary indicators and 23 tertiary indicators, with coefficients adjusted for department type (platform or surgical). The application of this evaluation system in a hospital in northeastern China yielded results that aligned closely with the actual operational scenario. In conclusion, the index system comprehensively considers the integrity and effectiveness of structural, process, and outcome indicators and stands as a reflection of the collective expertise of the engaged experts, manifested in a model designed to elevate the operational management of hospital departments.
Its strategic alignment with healthcare payment reforms holds practical significance in guiding departmental development positioning, brand cultivation, and talent development.
Keywords: Chinese healthcare system, Delphi method, departmental management, evaluation indicators, hospital operations, weight coefficients
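A weighted index system of the kind described above reduces, at scoring time, to a weighted sum of normalized indicator scores. The sketch below makes that mechanic concrete; the indicator names, weights, and department scores are invented placeholders, not the study's Delphi-derived coefficients.

```python
def composite_score(values, weights):
    """Weighted sum of normalized indicator scores for one department.
    Weights must sum to 1. All names and numbers here are illustrative,
    not the study's actual weight coefficients."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(values[k] * w for k, w in weights.items())

# Hypothetical weights for the four primary indicators:
weights = {"technical": 0.30, "cost_effectiveness": 0.25,
           "efficiency": 0.25, "potential": 0.20}
# Hypothetical normalized scores (0-100) for one department:
dept = {"technical": 82.0, "cost_effectiveness": 74.0,
        "efficiency": 90.0, "potential": 65.0}
print(round(composite_score(dept, weights), 2))  # -> 78.6
```

In practice each primary indicator would itself be an aggregate of the secondary and tertiary indicators, with coefficients varied for platform versus surgical departments.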
Procedia PDF Downloads 66
102 Rasagiline Improves Metabolic Function and Reduces Tissue Injury in the Substantia Nigra in Parkinson's Disease: A Longitudinal In-Vivo Advanced MRI Study
Authors: Omar Khan, Shana Krstevska, Edwin George, Veronica Gorden, Fen Bao, Christina Caon, NP-C, Carla Santiago, Imad Zak, Navid Seraji-Bozorgzad
Abstract:
Objective: To quantify cellular injury in the substantia nigra (SN) in patients with Parkinson's disease (PD) and to examine the effect of rasagiline on tissue injury in the SN in patients with PD. Background: N-acetylaspartate (NAA), quantified with MRS, is a reliable marker of neuronal metabolic function. Fractional anisotropy (FA) and mean diffusivity (MD), obtained with DTI, characterize tissue alignment and integrity. Rasagiline has been shown to exert an anti-apoptotic effect. We applied these advanced MRI techniques to examine: (i) the effect of rasagiline on cellular injury and metabolism in patients with early PD, and (ii) the longitudinal changes seen over time in PD. Methods: We conducted a prospective longitudinal study in patients with mild PD who were naive to dopaminergic treatment. The imaging protocol included multi-voxel proton-MRS and DTI of the SN, acquired on a 3T scanner. Scans were performed at baseline and month 3, during which the patients were on no treatment. At that point, rasagiline 1 mg orally daily was initiated, and MRI scans were obtained at 6 and 12 months after starting rasagiline. The primary objective was to compare changes during the 3-month period of “no treatment” to the changes observed “on treatment” with rasagiline at month 12. Age-matched healthy controls were also imaged. Image analysis was performed blinded to treatment allocation and period. Results: 25 patients were enrolled in this study. Compared to the “no treatment” period, there was a significant increase in NAA during the “on treatment” period (-3.04% vs +10.95%, p = 0.0006). Similarly, there was a significant increase in FA over the following 12 months “on treatment” (-4.8% vs +15.3%, p < 0.0001). The MD increased during “no treatment” and decreased “on treatment” (+2.8% vs -7.5%, p = 0.0056). Further analysis and clinical correlation are ongoing.
Conclusions: Quantifying cellular injury in the SN in PD with advanced MRI techniques is a feasible approach to investigating dopaminergic neuronal injury and could be developed as an outcome measure in exploratory studies. Rasagiline appears to have a stabilizing effect on dopaminergic cell loss and metabolism in the SN in PD that warrants further investigation in long-term studies.
Keywords: substantia nigra, Parkinson's disease, MRI, neuronal loss, biomarker
Procedia PDF Downloads 318
101 D-Lysine Assisted 1-Ethyl-3-(3-Dimethylaminopropyl)Carbodiimide / N-Hydroxy Succinimide Initiated Crosslinked Collagen Scaffold with Controlled Structural and Surface Properties
Authors: G. Krishnamoorthy, S. Anandhakumar
Abstract:
The effect of D-Lysine (D-Lys) on collagen cross-linking initiated with 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide (EDC)/N-hydroxysuccinimide (NHS) is evaluated using experimental and modelling tools. The results for the Coll-D-Lys-EDC/NHS scaffold indicate an increase in tensile strength (TS), percentage of elongation (%E), and denaturation temperature (Td), and a decrease in the decomposition rate compared to the L-Lys-EDC/NHS scaffold. Scanning electron microscopic (SEM) and atomic force microscopic (AFM) analyses revealed a well-ordered scaffold structure with properly oriented and well-aligned fibrils. D-Lys stabilizes the scaffold against degradation by collagenase more effectively than L-Lys. The cell assay showed more than 98% fibroblast viability (NIH3T3) and improved cell adhesion and protein adsorption after 72 h of culture when compared with the native scaffold. Cell attachment after 74 h was robust, with cytoskeletal analysis showing that the attached cells were aligned along the fibers, assuming a spindle-shaped appearance; gene expression analyses revealed no apparent alterations in mRNA levels, and cell proliferation was not adversely affected. D-Lysine plays a pivotal role in the self-assembly and conformation of collagen fibrils. D-Lys-assisted EDC/NHS-initiated cross-linking induces the formation of a carboxamide through the activation of the side-chain -COOH group, followed by aminolysis of the O-isoacylurea intermediates by the -NH2 groups, which are directly joined via isopeptide bonds. This leads to the formation of intra- and inter-helical cross-links. Modeling studies indicated that D-Lys binds with a collagen-like peptide (CLP) through multiple H-bonding and hydrophobic interactions. Orientational changes of collagenase on CLP-D-Lys are observed, which may decrease its accessibility to degradation and stabilize CLP against the action of the enzyme.
D-Lys has the lowest binding energy and improves fibrillar assembly and staggered alignment without undesired structural stiffness or aggregation. The proteolytic machinery is not as well equipped to deal with the Coll-D-Lys scaffold as with the Coll-L-Lys scaffold. The information derived from the present study could help in designing collagenolytically stable heterochiral collagen-based scaffolds for biomedical applications.
Keywords: collagen, collagenase, collagen like peptide, D-lysine, heterochiral collagen scaffold
Procedia PDF Downloads 393
100 Design and Construction Demeanor of a Very High Embankment Using Geosynthetics
Authors: Mariya Dayana, Budhmal Jain
Abstract:
Kannur International Airport Ltd. (KIAL) is a new greenfield airport project with airside development on undulating terrain with an average height of 90 m above Mean Sea Level (MSL) and a maximum height of 142 m. Accommodating the desired runway length and Runway End Safety Area (RESA) at both ends of the proposed alignment required 45.5 million cubic meters of cutting and filling. The insufficient availability of land for the construction of a free-slope embankment at the RESA 07 end led to the design and construction of a Reinforced Soil Slope (RSS) with a maximum slope of 65 degrees. An embankment fill of 70 m average height with steep slopes located in a high-rainfall area is a unique feature of this project. The design and construction were challenging, the fill being asymmetrical with curves and bends. The fill was reinforced with high-strength uniaxial geogrids laid perpendicular to the slope. Weld mesh wrapped with coir mat acted as the facia units to protect the slope against surface failure. Face anchorage was also provided by wrapping the geogrids along the facia units where the slope angle was steeper than 45 degrees. Considering the high rainfall received at this tabletop airport site, an extensive drainage system was designed for the high embankment fill. Gabion walls up to 10 m in height were also designed and constructed along the boundary to accommodate the toe of the RSS fill beside the jeepable track at the base level. The design of the RSS fill was done using ReSSA software and verified by PLAXIS 2D modeling. Both slip-surface failure and wedge failure cases were considered in static and seismic analyses for local and global failure. Laterite soil won from site excavation was used as the fill material for the construction. Extensive field and laboratory tests were conducted during the construction of the RSS system for quality assurance.
This paper presents a case study detailing the design and construction of a very high embankment using geosynthetics for the provision of the runway length and RESA area.
Keywords: airport, embankment, gabion, high strength uniaxial geogrid, KIAL, laterite soil, PLAXIS 2D
Procedia PDF Downloads 163
99 The Enhancement of Target Localization Using Ship-Borne Electro-Optical Stabilized Platform
Authors: Jaehoon Ha, Byungmo Kang, Kilho Hong, Jungsoo Park
Abstract:
Electro-optical (EO) stabilized platforms have been widely used for surveillance and reconnaissance on various types of vehicles, from surface ships to unmanned air vehicles (UAVs). EO stabilized platforms usually consist of an assembly of structures, bearings, and motors called gimbals, in which a gyroscope is installed. EO elements such as a CCD camera and an IR camera are mounted on a gimbal, which has a range of motion in elevation and azimuth and can designate and track a target. In addition, a laser range finder (LRF) can be added to the gimbal in order to acquire the precise slant range from the platform to the target. Recently, a versatile target localization capability has been needed in order to cooperate with the weapon systems mounted on the same platform, and the target information, such as location and velocity, needs to be more accurate. The accuracy of the target information depends on diverse component errors and the alignment errors of each component. In particular, the type of moving platform affects the accuracy of the target information. In the case of flying platforms, or UAVs, the target location error can increase with altitude, so it is important to measure altitude as precisely as possible. In the case of surface ships, the target location error can increase with the obliqueness of the elevation angle of the gimbal, since the altitude of the EO stabilized platform is relatively low. The farther the slant range from the surface ship to the target, the more extreme the obliqueness of the elevation angle, which can hamper the precise acquisition of the target information. So far, there have been many studies on EO stabilized platforms for flying vehicles. However, few researchers have focused on ship-borne EO stabilized platforms. In this paper, we deal with a target localization method for an EO stabilized platform located on the mast of a surface ship.
In particular, we need to overcome the limitation caused by the obliqueness of the elevation angle of the gimbal. We introduce a well-known approach to target localization using the Unscented Kalman Filter (UKF) and present a problem definition illustrating the above-mentioned limitation. Finally, we demonstrate the effectiveness of the approach through computer simulations.
Keywords: target localization, ship-borne electro-optical stabilized platform, unscented kalman filter
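The geometric sensitivity described above, where a shallow elevation angle at long slant range degrades location accuracy, can be illustrated with a flat-sea localization sketch. This is not the paper's UKF formulation; the East-North-Up convention, the downward-measured elevation angle, and all numbers below are illustrative assumptions.

```python
import math

def target_position(slant_range, azimuth_deg, elevation_deg, mast_height):
    """Convert gimbal measurements (LRF slant range in meters, azimuth,
    depression elevation in degrees) into a local East-North-Up target
    position relative to the ship. Error-free flat-sea sketch; a real
    system must propagate sensor and alignment errors (e.g. via a UKF)."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)  # measured downward from horizontal
    horizontal = slant_range * math.cos(el)
    east = horizontal * math.sin(az)
    north = horizontal * math.cos(az)
    up = mast_height - slant_range * math.sin(el)  # height above sea level
    return east, north, up

# A low 20 m mast and a 5 km slant range give a depression angle of only
# ~0.23 degrees; tiny angle errors then map into large horizontal errors.
e, n, u = target_position(5000.0, 90.0, 0.23, 20.0)
print(round(e, 1), round(n, 1), round(u, 2))
```

For a sea-surface target the computed "up" should come out near zero, which is one consistency check a localization filter can exploit.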
Procedia PDF Downloads 521
98 Exploration of Classic Models of Precipitation in Iran: A Case Study of Sistan and Baluchestan Province
Authors: Mohammad Borhani, Ahmad Jamshidzaei, Mehdi Koohsari
Abstract:
The study of climate has captivated human interest throughout history. In response to this fascination, people have historically organized their daily activities in alignment with prevailing climatic conditions and seasonal variations. Understanding the elements and specific climatic parameters of each region, such as precipitation, which directly impacts human life, is essential because, in recent years, there has been a significant increase in heavy rainfall in various parts of the world, attributed to the effects of climate change. Climate prediction models suggest a future scenario characterized by an increase in severe precipitation events and related floods on a global scale as a result of human-induced greenhouse gas emissions changing natural precipitation patterns. The Intergovernmental Panel on Climate Change reported on global warming in 2001: the average global temperature has shown an increasing trend since 1861, and over the 20th century the increase has been 0.6 ± 0.2 °C. The present study examines the trend of monthly, seasonal, and annual precipitation in Sistan and Baluchestan province. The study employed data obtained from 13 precipitation measurement stations managed by the Iran Water Resources Management Company, encompassing daily precipitation records spanning the period from 1997 to 2016. The results indicated that the total monthly precipitation at the studied stations in Sistan and Baluchestan province follows a sinusoidal pattern. The heaviest precipitation was observed in January, February, and March, while the lowest occurred in September, October, and November. The investigation of seasonal precipitation in this province showed that precipitation follows an upward trend in autumn, reaches its peak in winter, and then shows a decreasing trend in spring and summer.
The examination of average precipitation also indicated that the highest yearly precipitation occurred in 1997 and then in 2004, while the lowest annual precipitation took place between 1999 and 2001. The analysis of the annual precipitation trend demonstrates a decrease in precipitation from 1997 to 2016 in Sistan and Baluchestan province.
Keywords: climate change, extreme precipitation, greenhouse gas, trend analysis
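A decreasing annual trend of the kind reported above is typically quantified as an ordinary least-squares slope of precipitation against year. The sketch below shows that calculation; the synthetic series is invented for illustration and is not the station data.

```python
def trend_slope(years, values):
    """Ordinary least-squares slope of annual precipitation versus year
    (units per year); a negative slope indicates a decreasing trend."""
    n = len(years)
    mx = sum(years) / n
    my = sum(values) / n
    num = sum((x - mx) * (y - my) for x, y in zip(years, values))
    den = sum((x - mx) ** 2 for x in years)
    return num / den

years = list(range(1997, 2017))                       # 1997-2016, as in the study
precip = [120 - 1.5 * (y - 1997) for y in years]      # synthetic declining series (mm)
print(round(trend_slope(years, precip), 3))  # -> -1.5
```

On real, noisy station data the slope would be accompanied by a significance test (e.g. on the regression slope, or a non-parametric trend test).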
Procedia PDF Downloads 72
97 An Analytical View of Albanian and French Legislation on Access to Health Care Benefits
Authors: Oljana Hoxhaj
Abstract:
The integration process of Albania into the European family carries many difficulties. In this context, the Albanian legislator is inclined to implement in the domestic legal framework models that have been successful in other countries. Our paper presents an analytical and comparative approach to the health systems of Albania and France, focusing mainly on citizens' access to these services. The different standards and cultures of the two states, in the context of an approximate model, will be the first challenge of our paper. Over the last few years, the Albanian government has undertaken concrete reforms in this sector, aiming to transform the vision on which the previous health system was structured. From this perspective, the state not only fulfills an obligation to its citizens but also consolidates progressive steps toward alignment with European Union standards. The necessity of undertaking a genuine reform in this area arises from the exigencies of a society that has persistently identified problems within this sector, considering it ineffective, below standard, and corrupt. The inclusion of health services on the Albanian government's agenda reflects its will to promote good governance, transparency, and broader access to the provision of quality health services in the public and private sectors. The success of any initiative in the health system consists in giving priority to patient needs. Another objective that should be in the state's consideration is to create the premises for a comprehensive process on whose foundations partnership and broader cooperation with beneficiary entities are established in any decision-making directly related to their interests.
Other important and widespread influences on the effective realization of citizens' access to the healthcare system are the construction of appropriate infrastructure, increasing the professionalism and qualifications of medical staff, and the allocation of a higher budget. France has one of the most effective healthcare models in Europe. That is why we have chosen to analyze this country, aiming to highlight the advantages of its system as well as the commitment of the French state to drafting effective health policies. In the framework of the process of harmonizing Albanian legislation with that of the European Union, we aim through our work to identify the space for implementing these legislative innovations in Albanian legislation.
Keywords: effective service, harmonization level, innovation, reform
Procedia PDF Downloads 113
96 Structure, Bioinformatics Analysis and Substrate Specificity of a 6-Phospho-β-Glucosidase Glycoside Hydrolase 1 Enzyme from Bacillus licheniformis
Authors: Wayde Veldman, Ozlem T. Bishop, Igor Polikarpov
Abstract:
In bacteria, mono- and disaccharides are phosphorylated during uptake into the cell via the widely used phosphoenolpyruvate (PEP)-dependent phosphotransferase transport system. As an initial step in the phosphorylated disaccharide metabolism pathway, certain glycoside hydrolase family 1 (GH1) enzymes play a crucial role in releasing phosphorylated and non-phosphorylated monosaccharides. However, the structural determinants of the specificity of these enzymes still need to be clarified. GH1 enzymes are known to have a wide array of functions; according to the CAZy database, there are twenty-one different enzymatic activities in the GH1 family. Here, the structure and substrate specificity of a GH1 enzyme from Bacillus licheniformis, hereafter known as BlBglH, was investigated. The sequence of BlBglH was compared to the sequences of other characterized GH1 enzymes using sequence alignment, sequence identity calculations, phylogenetic analysis, and motif discovery. Through these analyses, BlBglH was found to have sequence features characteristic of enzymes with 6-phospho-β-glucosidase activity. Additionally, motif and structure comparisons of the three most commonly studied GH1 enzyme activities revealed a loop shared among the different structures but consisting of different sequence motifs; this loop is thought to guide specific substrates (depending on activity) towards the active site. To further affirm BlBglH enzyme activity, molecular docking and molecular dynamics simulations were performed. Docking was carried out using 6-phospho-β-glucosidase activity-positive (p-Nitrophenyl-beta-D-glucoside-6-phosphate) and activity-negative (p-Nitrophenyl-beta-D-galactoside-6-phosphate) control ligands, followed by 400 ns molecular dynamics simulations. The positive-control ligand maintained favourable interactions within the active site until the end of the simulation, whereas the negative-control ligand was observed exiting the enzyme at 287 ns.
Binding free energy calculations showed that the positive-control complex had a substantially more favourable binding energy than the negative-control complex. Taken together, the findings of this study suggest that the BlBglH enzyme possesses 6-phospho-β-glucosidase activity.
Keywords: 6-P-β-glucosidase, glycoside hydrolase 1, molecular dynamics, sequence analysis, substrate specificity
Procedia PDF Downloads 131
95 Barriers and Facilitators of Community Based Mental Health Intervention (CMHI) in Rural Bangladesh: Findings from a Descriptive Study
Authors: Rubina Jahan, Mohammad Zayeed Bin Alam, Sazzad Chowdhury, Sadia Chowdhury
Abstract:
Access to mental health services in Bangladesh is a tale of urban privilege and rural struggle. Mental health services in the country are primarily centred in urban medical hospitals, with only 260 psychiatrists for a population of more than 162 million, while rural populations face far more severe and daunting challenges. In alignment with the World Health Organization's view of mental health as a basic human right and a crucial component of personal, community, and socioeconomic development, SAJIDA Foundation, a values-driven non-government organization in Bangladesh, has introduced a Community-Based Mental Health Intervention (CMHI) program to fill critical gaps in mental health care, providing accessible and affordable community-based services to protect and promote mental health and to support those grappling with mental health conditions. The CMHI programme is being implemented in three districts of Bangladesh, two of them remote and highly climate-vulnerable areas, targeting a total of 6,797 individuals. The intervention begins with a screening of all participants using a 10-point vulnerability assessment tool to identify vulnerable individuals, on the assumption that individuals assessed as vulnerable are at increased risk of developing common mental health issues owing to biological, psychological, social, and economic factors. Those identified as vulnerable, with high-risk or emergency conditions, receive Mental Health First Aid (MHFA) and undergo further screening with the GHQ-12 to be classified as cases or non-cases. Identified cases are then referred to community lay counsellors with basic training, who provide 4-6 sessions on problem solving or behaviour activation. Where no improvement occurs after lay counselling, or for individuals with severe mental health conditions, a referral process directs individuals to appropriate mental health care.
Our presentation will report the findings from the 6-month pilot implementation, focusing on community-based screening versus the outcomes of the lay counselling sessions, and on the barriers and facilitators of implementing community-based mental health care in a resource-constrained country like Bangladesh.
Keywords: community-based mental health, lay counseling, rural Bangladesh, treatment gap
Procedia PDF Downloads 44
94 The Automatisation of Dictionary-Based Annotation in a Parallel Corpus of Old English
Authors: Ana Elvira Ojanguren Lopez, Javier Martin Arista
Abstract:
The aims of this paper are to present the automatisation procedure adopted in the implementation of a parallel corpus of Old English, as well as to assess the progress of automatisation with respect to tagging, annotation, and lemmatisation. The corpus consists of an aligned parallel text with word-for-word Old English-English comparison that provides the Old English segment with inflectional form tagging (gloss, lemma, category, and inflection) and lemma annotation (spelling, meaning, inflectional class, paradigm, word-formation, and secondary sources). This parallel corpus is intended to fill a gap in the field of Old English, in which no parallel and/or lemmatised corpora are available and the average amount of corpus annotation is low. Against this background, this presentation has two main parts. The first part, which focuses on tagging and annotation, selects the layouts and fields of lexical databases that are relevant for these tasks. Most information used for the annotation of the corpus can be retrieved from the lexical and morphological database Nerthus and the database of secondary sources Freya. These are the sources of linguistic and metalinguistic information that will be used for the annotation of the lemmas of the corpus, including morphological and semantic aspects as well as the references to the secondary sources that deal with the lemmas in question. Although substantially adapted and re-interpreted, the lemmatised part of these databases draws on the standard dictionaries of Old English, including The Student's Dictionary of Anglo-Saxon, An Anglo-Saxon Dictionary, and A Concise Anglo-Saxon Dictionary. The second part of this paper deals with lemmatisation. It presents the lemmatiser Norna, which has been implemented in Filemaker software and is based on a concordance and an index to the Dictionary of Old English Corpus, which comprises around three thousand texts and three million words.
In its present state, the lemmatiser Norna can assign a lemma to around 80% of textual forms automatically, by searching the index and the concordance for prefixes, stems, and inflectional endings. The conclusions of this presentation dwell on the limits of the automatisation of dictionary-based annotation in a parallel corpus. While tagging and annotation are largely automatic even at the present stage, the automatisation of alignment is pending for future research. Lemmatisation and morphological tagging are expected to be fully automatic in the near future, once the database of secondary sources Freya and the lemmatiser Norna have been completed.
Keywords: corpus linguistics, historical linguistics, Old English, parallel corpus
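The prefix/stem/ending lookup described for Norna can be sketched as follows. This is a minimal, hypothetical illustration in Python, not the actual Filemaker implementation; the prefixes, endings, and stem-to-lemma entries are invented for the example rather than drawn from the Nerthus database.

```python
# Minimal sketch of dictionary-based lemma assignment by prefix,
# stem and inflectional-ending matching (illustrative only; the
# entries below are invented, not taken from the Nerthus database).

PREFIXES = ["ge", "for", "be"]             # sample Old English prefixes
ENDINGS = ["est", "ath", "an", "e", "on"]  # sample inflectional endings

# hypothetical stem -> lemma entries
STEM_INDEX = {
    "lufi": "lufian",   # 'to love'
    "rid": "ridan",     # 'to ride'
}

def lemmatise(form):
    """Try to assign a lemma by stripping a known prefix and ending."""
    candidates = [form]
    for p in PREFIXES:                     # optionally strip one prefix
        if form.startswith(p):
            candidates.append(form[len(p):])
    for base in candidates:
        for end in sorted(ENDINGS, key=len, reverse=True):
            if base.endswith(end):
                stem = base[: -len(end)]
                if stem in STEM_INDEX:
                    return STEM_INDEX[stem]
        if base in STEM_INDEX:
            return STEM_INDEX[base]
    return None                            # unresolved: left for manual review

print(lemmatise("lufiath"))   # inflected form -> lufian
print(lemmatise("geridan"))   # prefixed form -> ridan
```

Forms that match no stem fall through to `None`, mirroring the roughly 20% of textual forms the abstract says still require manual treatment.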
Procedia PDF Downloads 213
93 Developing a Decision-Making Tool for Prioritizing Green Building Initiatives
Authors: Tayyab Ahmad, Gerard Healey
Abstract:
Sustainability in the built environment sector is subject to many development constraints. Building projects are developed under different requirements of deliverables, which makes each project unique. For an owner organization, e.g., a higher-education institution with a significant building stock, it is important to prioritize some sustainability initiatives over others in order to align sustainable building development with organizational goals. Point-based green building rating tools such as Green Star, LEED, and BREEAM are becoming increasingly popular and are well acknowledged worldwide for verifying sustainable development. It is therefore worthwhile to synthesize a multi-criteria decision-making tool that capitalizes on the point-based methodology of rating systems while customizing the sustainable development of building projects according to the individual requirements and constraints of the client organization. A multi-criteria decision-making tool for the University of Melbourne is developed that builds on the action learning and experience of implementing green buildings at the University of Melbourne. The tool evaluates different sustainable building initiatives based on the framework of the Green Star rating tool of the Green Building Council of Australia. For each sustainability initiative, the decision-making tool makes an assessment based on at least five performance criteria, including the ease with which the initiative can be achieved and its potential to enhance project objectives, reduce life-cycle costs, enhance the University's reputation, and increase confidence in quality construction. The use of a weighted aggregation mathematical model in the proposed tool can play a considerable role in the decision-making process of a green building project by indexing the green building initiatives in terms of organizational priorities.
The index value of each initiative is based on its alignment with some of the key performance criteria. The usefulness of the decision-making tool is validated by conducting structured interviews with some of the key stakeholders involved in the development of sustainable building projects at the University of Melbourne. The proposed tool is found to help a client organization decide which sustainability initiatives and practices are more important to pursue than others within limited resources.
Keywords: higher education institution, multi-criteria decision-making tool, organizational values, prioritizing sustainability initiatives, weighted aggregation model
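The weighted aggregation indexing described above can be sketched as follows. The criteria track the five performance criteria named in the abstract, but the weights, initiative names, and scores are hypothetical illustrations, not data from the University of Melbourne tool.

```python
# Illustrative weighted-aggregation index for ranking green building
# initiatives (a sketch; criterion weights, initiative names and scores
# below are hypothetical).

CRITERIA_WEIGHTS = {          # organizational priorities, summing to 1.0
    "ease_of_achievement": 0.15,
    "project_objectives": 0.30,
    "life_cycle_cost_reduction": 0.25,
    "reputation": 0.15,
    "construction_quality_confidence": 0.15,
}

def initiative_index(scores):
    """Weighted sum of per-criterion scores (each on a 1-5 scale)."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

initiatives = {
    "rainwater_harvesting": {"ease_of_achievement": 4, "project_objectives": 3,
                             "life_cycle_cost_reduction": 4, "reputation": 3,
                             "construction_quality_confidence": 3},
    "on_site_solar": {"ease_of_achievement": 2, "project_objectives": 5,
                      "life_cycle_cost_reduction": 5, "reputation": 5,
                      "construction_quality_confidence": 4},
}

# rank initiatives by index value, highest priority first
ranked = sorted(initiatives, key=lambda n: initiative_index(initiatives[n]),
                reverse=True)
print(ranked)
```

Changing the weight vector re-ranks the same initiatives, which is how such a model can encode different organizational priorities.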
Procedia PDF Downloads 234
92 Towards End-To-End Disease Prediction from Raw Metagenomic Data
Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker
Abstract:
Analysis of the human microbiome using metagenomic sequencing data has demonstrated a high ability to discriminate various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequence reads from the fragmented DNA, stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use and time-consuming, and they rely on a large number of parameters that often introduce variability and impact the estimation of the microbiome elements. Training deep neural networks directly on raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most of these methods use the concept of word and sentence embeddings, which creates a meaningful numerical representation of DNA sequences while extracting features and reducing the dimensionality of the data. In this paper, we present an end-to-end approach, metagenome2vec, that classifies patients into disease groups directly from raw metagenomic reads. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which each sequence is most likely to come; and (iv) training a multiple instance learning classifier that predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to the influence of each genome on the prediction.
Using two public real-life datasets as well as a simulated one, we demonstrate that this original approach reaches high performance, comparable with state-of-the-art methods applied directly to data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that, with further dedication, DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
Keywords: deep learning, disease prediction, end-to-end machine learning, metagenomics, multiple instance learning, precision medicine
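Step (i) of such a pipeline, turning raw reads into overlapping k-mer "words" and building a vocabulary, can be sketched as follows. The embedding-learning stage (typically a word2vec-style model) is omitted, and the toy reads are invented for illustration; this is not the metagenome2vec code itself.

```python
# Sketch of the k-mer vocabulary step of an end-to-end metagenomic
# pipeline: each read is split into overlapping k-mers, which become
# the "words" whose embeddings would later be learned (omitted here).

from collections import Counter

def kmerize(read, k=4):
    """Split a DNA read into its overlapping k-mers."""
    return [read[i:i + k] for i in range(len(read) - k + 1)]

reads = ["ATGCGT", "GCGTAC"]   # toy reads; real fastq files hold millions

vocab = Counter()              # k-mer -> occurrence count across all reads
for r in reads:
    vocab.update(kmerize(r))

print(kmerize("ATGCGT"))       # ['ATGC', 'TGCG', 'GCGT']
print(vocab["GCGT"])           # 2 (shared by both reads)
```

A read embedding can then be formed from the embeddings of its k-mers, which is what allows the downstream multiple instance learning classifier to operate on bags of reads.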
Procedia PDF Downloads 126
91 Qualitative Analysis of User Experiences and Needs for Educational Chatbots in Higher Education
Authors: Felix Golla
Abstract:
In an era where technology increasingly intersects with education, the potential of chatbots and ChatGPT agents in enhancing student learning experiences in higher education is both significant and timely. This study explores the integration of these AI-driven tools in educational settings, emphasizing their design and functionality to meet the specific needs of students. Recognizing the gap in literature concerning student-centered AI applications in education, this research offers valuable insights into the role and efficacy of chatbots and ChatGPT agents as educational tools. Employing qualitative research methodologies, the study involved conducting semi-structured interviews with university students. These interviews were designed to gather in-depth insights into the students' experiences and expectations regarding the use of AI in learning environments. The High-Performance Cycle Model, renowned for its focus on goal setting and motivation, served as the theoretical framework guiding the analysis. This model helped in systematically categorizing and interpreting the data, revealing the nuanced perceptions and preferences of students regarding AI tools in education. The major findings of the study indicate a strong preference among students for chatbots and ChatGPT agents that offer personalized interaction, adaptive learning support, and regular, constructive feedback. These features were deemed essential for enhancing student engagement, motivation, and overall learning outcomes. Furthermore, the study revealed that students perceive these AI tools not just as passive sources of information but as active facilitators in the learning process, capable of adapting to individual learning styles and needs. In conclusion, this study underscores the transformative potential of chatbots and ChatGPT agents in higher education. 
It highlights the need for these AI tools to be designed with a student-centered approach, ensuring their alignment with educational objectives and student preferences. The findings contribute to the evolving discourse on AI in education, suggesting a paradigm shift towards more interactive, responsive, and personalized learning experiences. This research not only informs educators and technologists about the desirable features of educational chatbots but also opens avenues for future studies to explore the long-term impact of AI integration in academic curricula.
Keywords: chatbot design in education, high-performance cycle model application, qualitative research in AI, student-centered learning technologies
Procedia PDF Downloads 70
90 Ho-Doped Lithium Niobate Thin Films: Raman Spectroscopy, Structure and Luminescence
Authors: Edvard Kokanyan, Narine Babajanyan, Ninel Kokanyan, Marco Bazzan
Abstract:
Lithium niobate (LN) crystals, renowned for their exceptional nonlinear optical, electro-optical, piezoelectric, and photorefractive properties, stand as foundational materials in diverse fields of study and application. While they have long been utilized in frequency converters of laser radiation, electro-optical modulators, and holographic information recording media, LN crystals doped with rare earth ions represent a compelling frontier for modern compact devices. These materials exhibit immense potential as key components in infrared lasers, optical sensors, self-cooling systems, and radiation-balanced laser setups. In this study, we present the successful synthesis of Ho-doped lithium niobate (LN:Ho) thin films on sapphire substrates employing the Sol-Gel technique. The films exhibit a strong crystallographic orientation perpendicular to the substrate surface, with X-ray diffraction analysis confirming the predominant alignment of the film's "c" axis, notably evidenced by the intense (006) reflection peak. Further characterization through Raman spectroscopy, employing a confocal Raman microscope (LabRAM HR Evolution) with excitation wavelengths of 532 nm and 785 nm, revealed intriguing insights. Under excitation with a 785 nm laser, Raman scattering obeyed the selection rules, while a 532 nm laser unveiled additional forbidden lines reminiscent of behaviors observed in bulk LN:Ho crystals. These supplementary lines were attributed to luminescence induced by excitation at 532 nm. Leveraging data from anti-Stokes Raman lines facilitated the disentanglement of the luminescence spectra from the investigated samples. Surface scanning affirmed the uniformity of both structure and luminescence across the thin films.
Notably, despite the robust orientation of the "c" axis perpendicular to the substrate surface, Raman signals indicated a stochastic distribution of the "a" and "b" axes, validating the mosaic structure of the films along the mentioned axis. This study offers valuable insights into the structural properties of Ho-doped lithium niobate thin films, with the observed luminescence behavior holding significant promise for potential applications in optoelectronic devices.
Keywords: lithium niobate, Sol-Gel, luminescence, Raman spectroscopy
Procedia PDF Downloads 61
89 Understanding the Influence of Social Media on Individual’s Quality of Life Perceptions
Authors: Biljana Marković
Abstract:
Social networks are an integral part of our everyday lives and have become an indispensable medium for communication in personal and business environments. New forms and ways of communication change the general mindset and significantly affect the quality of life of individuals. Quality of life is perceived as an abstract term, but people are often unaware that they directly affect the quality of their own lives through minor but significant everyday choices and decisions. Quality of life can be defined broadly, but in the widest sense it involves a subjective sense of satisfaction with one's life. Scientific knowledge about the impact of social networks on individuals' self-assessment of quality of life is only beginning to accumulate; available research indicates potential benefits as well as a number of disadvantages. Against this background, the study conducted by the authors of this paper focuses on analyzing the impact of social networks on individuals' self-assessment of quality of life and the correlation between time spent on social networks and the content that individuals choose to share to present themselves. Moreover, it aims to explain how much, and in what ways, individuals critically judge the lives of others online. The research aspires to show the positive as well as the negative effects that social networks, primarily Facebook and Instagram, have on the image individuals create of themselves and on how they compare themselves with others. The paper is based on quantitative research conducted on a representative sample. An analysis of the results of the survey, conducted online, tests the hypothesis that content shared by individuals on social networks influences the image they create about themselves.
A comparative analysis of the results obtained with the results of similar research leads to conclusions about the synergistic influence of social networks on respondents' perceived quality of life. The originality of this work lies in its approach of examining attitudes about individuals' life satisfaction, the way they create an image of themselves through social networks, the extent to which they compare themselves with others, and which social media applications they use. At the cognitive level, scientific contributions are made through the development of information concepts on quality of life; at the methodological level, through the development of an original methodology for qualitative alignment of respondents' attitudes using statistical analysis; and at the practical level, through the application of these concepts in assessing how self-image and the image of others are created through social networks.
Keywords: quality of life, social media, self-image, influence of social media
Procedia PDF Downloads 128
88 Design Evaluation Tool for Small Wind Turbine Systems Based on the Simple Load Model
Authors: Jihane Bouabid
Abstract:
The urgency of transitioning towards sustainable energy sources has become imperative. In the 21st century, society demands continuous technological advancement and expects rapid outcomes as part of its pursuit of an elevated standard of living. As part of empowering human development, driving economic growth, and meeting social needs, access to energy services has become a necessity. As part of these improvements, we introduce the project "Mywindturbine", an interactive web user interface for design and analysis in the field of wind energy, with particular adherence to the IEC (International Electrotechnical Commission) standard 61400-2 "Wind turbines – Part 2: Design requirements for small wind turbines". Wind turbines play a pivotal role in Morocco's renewable energy strategy, leveraging the nation's abundant wind resources. The IEC 61400-2 standard ensures the safety and design integrity of small wind turbines deployed in Morocco, providing guidelines for performance and safety protocols. Conformity with this standard ensures turbine reliability, facilitates standards alignment, and accelerates the integration of wind energy into Morocco's energy landscape. The GUI (Graphical User Interface) is aimed at engineers and professionals in the field of wind energy systems who would like to design a small wind turbine system following the safety requirements of the international standard IEC 61400-2. The interface provides an easy way to analyze the structure of the turbine machine under normal and extreme load conditions based on the specific inputs provided by the user. The platform opens with an overview of sustainability and renewable energy, with a focus on wind turbines.
It features a cross-examination of the input parameters provided by the user for the SLM (Simple Load Model) of small wind turbines and produces an analysis according to the IEC 61400-2 standard. The analysis of the simple load model encompasses calculations of, among others, fatigue loads on the blades and rotor shaft and yaw error loads on the blades. Through its structured framework and adherence to the IEC standard, "Mywindturbine" aims to empower professionals, engineers, and intellectuals with the knowledge and tools necessary to contribute towards a sustainable energy future.
Keywords: small wind turbine, IEC 61400-2 standard, user interface, simple load model
Procedia PDF Downloads 63
87 Metabolic Profiling in Breast Cancer Applying Micro-Sampling of Biological Fluids and Analysis by Gas Chromatography – Mass Spectrometry
Authors: Mónica P. Cala, Juan S. Carreño, Roland J.W. Meesters
Abstract:
Recently, the collection of biological fluids on special filter papers has become a popular micro-sampling technique. The dried blood spot (DBS) micro-sampling technique in particular has gained much attention and is currently applied in various life sciences research areas. As a result of this popularity, DBS not only competes intensively with venous blood sampling but is now widely applied in numerous bioanalytical assays, in particular in the screening of inherited metabolic diseases, in pharmacokinetic modeling, and in therapeutic drug monitoring. Recently, micro-sampling techniques were also introduced in "omics" areas, including metabolomics. For a metabolic profiling study, we applied micro-sampling of biological fluids (blood and plasma) from healthy controls and from women with breast cancer. From blood samples, dried blood and plasma samples were prepared by spotting 8 µL of sample onto pre-cut 5-mm paper disks, followed by drying of the disks for 100 minutes. Dried disks were then extracted with 100 µL of methanol. From liquid blood and plasma samples, 40 µL were deproteinized with methanol, followed by centrifugation and collection of the supernatants. Supernatants and extracts were evaporated to dryness under nitrogen gas, and the residues were derivatized with O-methoxyamine and MSTFA. C17:0 methyl ester in heptane (10 ppm) was used as the internal standard. Deconvolution and alignment of full-scan (m/z 50-500) MS data were done with the AMDIS and SpectConnect (http://spectconnect.mit.edu) software, respectively. Statistical data analysis was done by Principal Component Analysis (PCA) using the R software. The results obtained from our preliminary study indicate that the use of dried blood/plasma on paper disks could be a powerful new tool in metabolic profiling. Many of the metabolites observed in plasma (liquid/dried) were also positively identified in whole blood samples (liquid/dried).
Whole blood could be a potential substitute matrix for plasma in metabolomic profiling studies, and micro-sampling techniques could likewise serve for the collection of samples in clinical studies. It was concluded that the separation of the different sample methodologies (liquid vs. dried) observed by PCA was due to the different sample treatment protocols applied. More experiments need to be done to confirm these observations, and a more rigorous validation of these micro-sampling techniques is needed. The novelty of our approach lies in the application of different biological fluid micro-sampling techniques for metabolic profiling.
Keywords: biofluids, breast cancer, metabolic profiling, micro-sampling
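The kind of PCA-based separation check described above can be sketched as follows. This is a minimal SVD-based implementation on invented toy data (the study itself used R), shown only to illustrate how samples from two preparation protocols could separate along the first principal component.

```python
# Minimal PCA via SVD, illustrating an unsupervised separation check
# between two sample-preparation groups (toy data; the real study
# used R for its PCA).

import numpy as np

def pca_scores(X, n_components=2):
    """Project mean-centred samples onto their leading principal components."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# rows = samples (first two from one protocol, last two from another),
# columns = metabolite peak intensities (hypothetical values)
X = np.array([[1.0, 2.0, 0.5],
              [1.1, 2.1, 0.4],
              [3.0, 0.5, 2.0],
              [3.1, 0.4, 2.1]])

scores = pca_scores(X)
print(scores.shape)   # (4, 2): each sample reduced to two PC coordinates
```

With data like this, the two protocol groups land on opposite sides of the first principal component, which is the pattern the abstract attributes to the different sample treatment protocols.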
Procedia PDF Downloads 412
86 Commercial Winding for Superconducting Cables and Magnets
Authors: Glenn Auld Knierim
Abstract:
Automated robotic winding of high-temperature superconductors (HTS) addresses the precision, efficiency, and reliability critical to the commercialization of products. Today's HTS materials are mature and commercially promising but require manufacturing attention. Owing in particular to the exaggerated rectangular cross-section (very thin by very wide), winding precision is critical to manage the stress that can crack the fragile ceramic superconductor (SC) layer and destroy the SC properties. Damage potential is highest during peak operations, where winding stress magnifies operational stress. Another challenge is that operational parameters such as magnetic field alignment affect design performance. Winding process performance, including precision, capability for geometric complexity, and efficient repeatability, is required for commercial production of current HTS. Due to winding limitations, current HTS magnets focus on simple pancake configurations; HTS motors, generators, MRI/NMR, fusion, and other projects are awaiting robotically wound solenoid, planar, and spherical magnet configurations. As with conventional power cables, full transposition winding is required for long-length alternating current (AC) and pulsed power cables. Robotic production is required for transposition, the periodic swapping of cable conductors into precise positions, which yields the minimized reactance required by power utilities. A full-transposition SC cable, in theory, has no transmission length limits for AC and variable transient operation due to no resistance (a problem with conventional cables), negligible reactance (a problem for helically wound HTS cables), and no long-length manufacturing issues (a problem with both stamped and twisted stacked HTS cables). The Infinity Physics team is solving these manufacturing problems by developing automated manufacturing to produce the first reliable, utility-grade commercial SC cables and magnets.
Robotic winding machines combine mechanical and process design, specialized sensing and observers, and state-of-the-art optimization and control sequencing to carefully manipulate individual fragile SCs, especially HTS, to shape previously unattainable, complex geometries with electrical geometry equivalent to commercially available conventional conductor devices.
Keywords: automated winding manufacturing, high temperature superconductor, magnet, power cable
Procedia PDF Downloads 141
85 Educating through Design: Eco-Architecture as a Form of Public Awareness
Authors: Carmela Cucuzzella, Jean-Pierre Chupin
Abstract:
Eco-architecture today is being assessed and judged increasingly on the basis of its environmental performance and its dedication to urgent stakes of sustainability. Architects have responded to environmental imperatives in novel ways since the 1960s. In the last two decades, however, different forms of eco-architecture practice have emerged that seem as dedicated to the issues of sustainability as to their ability to 'communicate' their ecological features. The hypothesis is that some contemporary eco-architecture has been developing a characteristic 'explanatory discourse', which it is possible to identify in buildings around the world. Some eco-architecture practices do not simply demonstrate their alignment with pressing ecological issues; rather, these buildings seem to be driven by the urgent need to explain their 'greenness'. The design aims specifically to teach visitors about the building's eco-qualities. These types of architectural practices are referred to in this paper as eco-didactic. The aim of this paper is to identify and assess this distinctive form of environmental architecture practice that aims to teach. These buildings constitute an entirely new form of design practice that places eco-messages squarely in the public realm. These eco-messages appear to have a variety of purposes: (i) to raise awareness of unsustainable quotidian habits, (ii) to become means of behavioral change, (iii) to publicly announce responsibility through the designed eco-features, or (iv) to engage the patrons of the building in some form of sustainable interaction. To do this, a comprehensive review of Canadian eco-architecture since 1998 is conducted.
Their potential eco-didactic aspects are analysed through the lens of three vectors: (1) cognitive visitor experience, between the desire to inform and the poetics of form (are parts of the design dedicated to informing visitors of the environmental aspects?); (2) formal architectural qualities, between the visibility and the invisibility of environmental features (are these eco-features clearly visible to visitors?); and (3) communicative method for delivering the eco-message, where the transmission of knowledge is accomplished somewhere between consensus and dissensus (do visitors question the eco-features, or do they accept them as environmental?). These architectural forms distinguish themselves in their crossing of disciplines, specifically architecture, environmental design, and art. They also differ from other architectural practices in how they aim to mobilize different publics within various urban landscapes. The diversity of such buildings, from how and what they aim to communicate to the audience they wish to engage, are all key parameters for better understanding their means of knowledge transfer. Cases from major cities across Canada are analysed, aiming to illustrate this increasing worldwide phenomenon.
Keywords: eco-architecture, public awareness, community engagement, didacticism, communication
Procedia PDF Downloads 128
84 Anti-Gravity to Neo-Concretism: The Epodic Spaces of Non-Objective Art
Authors: Alexandra Kennedy
Abstract:
Making use of the notion of 'epodic spaces', this paper presents a reconsideration of non-objective art practices, proposing alternatives to established materialist, formalist, and process-based conceptualist approaches to such work. In his Neo-Concrete Manifesto (1959), Ferreira Gullar (1930-2016) sought to create a distinction between various forms of non-objective art. He distinguished the 'geometric' arts of neoplasticism, constructivism, and suprematism, which he described as 'dangerously acute rationalism', from other non-objective practices. These alternatives, he proposed, have an expressive potential lacking in the former, and this formed the basis for their categorisation as neo-concrete. Gullar prioritized the phenomenological over the rational, with an emphasis on the role of the spectator (a key concept of minimalism), and highlighted the central role of sensual experience, colour, and the poetic in such work. In the early twentieth century, Russian Cosmism, an esoteric philosophical movement, was highly influential on Russian avant-garde artists and can account for the suprematist artists' interest in, and approach to, planar geometry and four-dimensional space, as demonstrated in the abstract paintings of Kasimir Malevich (1879-1935). Nikolai Fyodorov (1829-1903) promoted the idea of anti-gravity and cosmic space as the field for artistic activity. The artist and writer Kuzma Petrov-Vodkin (1878-1939) wrote on the concept of Euclidean space, the overcoming of such rational conceptions of space, and the breaking free from the gravitational field and the earth's sphere. These imaginary spaces, which also invoke a bodily experience, lend a poetic dimension to the work of the suprematists, a dimension that arguably aligns more with Gullar's formulation of the neo-concrete than with his alignment of Suprematism with rationalism.
In its experiments with planar geometry, in its interest in forms suggestive of breaking free – both physically from the earth and conceptually from rational, mathematical space (a preoccupation with non-Euclidean space and anti-geometry) – and in its engagement with the spatial properties of colour, Suprematism presents itself as imaginatively epodic. The paper discusses both historical and contemporary non-objective practices in this context, drawing attention to the manner in which the category of the non-objective is used to group art works which are, arguably, qualitatively different.
Keywords: anti-gravity, neo-concrete, non-Euclidean geometry, non-objective painting
Procedia PDF Downloads 179
83 Using Signature Assignments and Rubrics in Assessing Institutional Learning Outcomes and Student Learning
Authors: Leigh Ann Wilson, Melanie Borrego
Abstract:
The purpose of institutional learning outcomes (ILOs) is to assess what students across the university know and what they do not. The issue is gathering this information in a systematic and usable way. This presentation will explain how one institution has engineered this process for both student success and maximum faculty input into curriculum and course design. At Brandman University, there are three levels of learning outcomes: course, program, and institutional. Institutional Learning Outcomes (ILOs) are mapped to specific courses. Faculty course developers write the signature assignments (SAs) in alignment with the Institutional Learning Outcomes for each course. These SAs use a specific rubric that is applied consistently by every section and every instructor. Each year, the 12-member General Education Team (GET), as a part of their work, conducts the calibration and assessment of the university-wide SAs and the related rubrics for one or two of the five ILOs. GET members, senior faculty and administrators representing each of the university's schools, lead the calibration meetings. Specifically, calibration is a process designed to ensure the accuracy and reliability of evaluating signature assignments by working with peer faculty to interpret rubrics and compare scoring. These calibration meetings include the full-time and adjunct faculty members who teach the course, to ensure consensus on the application of the rubric. Each calibration session is chaired by a GET representative as well as the course custodian/contact where the ILO signature assignment resides. The overall calibration process GET follows includes multiple steps: contacting and inviting relevant faculty members to participate; organizing and hosting calibration sessions; and reviewing and discussing at least 10 samples of student work from class sections during the previous academic year, for each applicable signature assignment.
The time commitment for calibration teams consists of attending two virtual meetings lasting up to three hours. The first meeting focuses on interpreting the rubric, and the second involves comparing scores for sample work and sharing feedback about the rubric and assignment. Participants are expected to follow all directions provided, participate actively, and respond to scheduling requests and other emails within 72 hours. The virtual meetings are recorded for future institutional use. Adjunct faculty are paid a small stipend after participating in both calibration meetings. Full-time faculty can use this work on their annual faculty report for "internal service" credit.
Keywords: assessment, assurance of learning, course design, institutional learning outcomes, rubrics, signature assignments
Procedia PDF Downloads 280
82 Analysis of Splicing Methods for High Speed Automated Fibre Placement Applications
Authors: Phillip Kearney, Constantina Lekakou, Stephen Belcher, Alessandro Sordon
Abstract:
The focus in the automotive industry is to reduce human operator and machine interaction, so that manufacturing becomes more automated and safer. The aim is to lower part cost and construction time as well as defects in the parts, which sometimes occur due to the physical limitations of human operators. A move to automate the layup of reinforcement material in composites manufacturing has resulted in the use of tapes that are placed in position by a robotic deposition head, a process described as Automated Fibre Placement (AFP). AFP is limited by the finite amount of material that can be loaded into the machine at any one time. Joining two batches of tape material together involves a splice to secure the end of the finishing tape to the starting edge of the new tape. The splicing method of choice for the majority of prepreg applications is a hand-stitch method, which, as the name suggests, requires human input. This investigation explores three methods for automated splicing, namely adhesive, binding and stitching. The adhesive technique uses an additional adhesive placed on the tape ends to be joined. Binding uses the binding agent already impregnated onto the tape, activated through the application of heat. The stitching method serves as the baseline, representing the traditional technique currently in use, against which the new splicing methods are compared. As the methods will be used within a High Speed Automated Fibre Placement (HSAFP) process, the splices have to meet certain specifications: (a) the splice must be able to endure a load of 50 N in tension applied at a rate of 1 mm/s; (b) the splice must be created in less than 6 seconds, dictated by the capacity of the tape accumulator within the system. The samples for experimentation were manufactured with controlled overlap, alignment and splicing parameters; these were then tested in tension using a tensile testing machine.
Initial analysis explored the use of the binding agent impregnated onto the tape, as in the binding splicing technique, examining the effect of temperature and overlap on the strength of the splice. The optimum splicing temperature was found to be at the higher end of the activation range of the binding agent, 100 °C. The optimum overlap was 25 mm; no improvement in bond strength was observed in increasing the overlap from 25 mm to 30 mm. The final analysis compared the different splicing methods to the baseline of a stitched bond. The addition of an adhesive was found to be the best splicing method, achieving a maximum load of over 500 N, compared to the 26 N achieved by a stitched splice and 94 N by the binding method.
Keywords: analysis, automated fibre placement, high speed, splicing
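As a rough illustration of the 50 N tensile requirement stated above, the sketch below (plain Python; the function names are ours, and the load figures are the maximum loads reported in this abstract) checks each splicing method against the specification. It is an illustrative aid, not part of the study.

```python
# Hypothetical sketch: checking the reported maximum splice loads against
# the HSAFP specification (>= 50 N in tension) quoted in the abstract.

SPEC_LOAD_N = 50  # required tensile load for an HSAFP splice, per the spec above

reported_max_loads = {
    "stitching": 26,   # N, baseline hand-stitch splice
    "binding": 94,     # N, impregnated binder activated at 100 deg C, 25 mm overlap
    "adhesive": 500,   # N, additional adhesive (reported as "over 500 N")
}

def meets_spec(load_n, spec_n=SPEC_LOAD_N):
    """Return True if the splice survives the required tensile load."""
    return load_n >= spec_n

# Which methods pass the tensile requirement?
passing = {method: meets_spec(load) for method, load in reported_max_loads.items()}
```

On these figures only the binding and adhesive splices clear the 50 N bar, consistent with the comparison reported above (the 6-second creation-time requirement is not modelled here).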
Procedia PDF Downloads 155
81 DNA Hypomethylating Agents Induced Histone Acetylation Changes in Leukemia
Authors: Sridhar A. Malkaram, Tamer E. Fandy
Abstract:
Purpose: 5-Azacytidine (5AC) and decitabine (DC) are DNA hypomethylating agents. We recently demonstrated that both drugs increase the enzymatic activity of the histone deacetylase enzyme SIRT6. Accordingly, we compared the genome-wide H3K9 acetylation changes induced by both drugs in leukemia cells. Description of Methods & Materials: Mononuclear cells from the bone marrow of six de-identified treatment-naive acute myeloid leukemia (AML) patients were cultured with 500 nM of either DC or 5AC for 72 h, followed by ChIP-Seq analysis using a ChIP-validated acetylated-H3K9 (H3K9ac) antibody. ChIP-Seq libraries were prepared from treated and untreated cells using the SMARTer ThruPLEX DNA-seq kit (Takara Bio, USA) according to the manufacturer’s instructions. Libraries were purified and size-selected with AMPure XP beads at a 1:1 (v/v) ratio. All libraries were pooled prior to sequencing on an Illumina HiSeq 1500. The dual-indexed single-read Rapid Run was performed with 1x120 cycles at a 5 pM final concentration of the library pool. Sequence reads with average Phred quality < 20 or length < 35 bp, PCR duplicates, and reads aligning to blacklisted regions of the genome were filtered out using Trim Galore v0.4.4 and cutadapt v1.18. Reads were aligned to the reference human genome (hg38) using Bowtie v2.3.4.1 in end-to-end alignment mode. H3K9ac-enriched (peak) regions were identified using diffReps v1.55.4, with input samples used for background correction. The statistical significance of differential peak counts was assessed with a negative binomial test using all individuals as replicates. Data & Results: The data from the six patients showed significant (Padj < 0.05) acetylation changes at 925 loci after 5AC treatment versus 182 loci after DC treatment. Both drugs induced H3K9 acetylation changes at different chromosomal regions, including promoters, coding exons, introns, and distal intergenic regions.
Ten common genes showed H3K9 acetylation changes with both drugs. Approximately 84% of the genes showed an H3K9 acetylation decrease with 5AC, versus only 54% with DC. Figures 1 and 2 show the heatmaps for the top 100 genes and the 99 genes showing an H3K9 acetylation decrease after 5AC treatment and DC treatment, respectively. Conclusion: Despite the similarity in hypomethylating activity and chemical structure, the effects of the two drugs on H3K9 acetylation were significantly different, with more changes observed after 5AC treatment than after DC treatment. The impact of these changes on gene expression and on the clinical efficacy of these drugs requires further investigation.
Keywords: DNA methylation, leukemia, decitabine, 5-Azacytidine, epigenetics
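The read-level filters described in the methods (discard reads with average Phred quality below 20 or length below 35 bp) can be sketched as follows. This is an illustrative re-implementation in Python, not the Trim Galore/cutadapt pipeline the authors actually ran; standard Phred+33 FASTQ encoding is assumed.

```python
# Illustrative sketch of the read-filtering criteria described above:
# keep a read only if its mean Phred quality is >= 20 and its length >= 35 bp.
# (The study used Trim Galore v0.4.4 and cutadapt v1.18 for this step.)

def mean_phred(quality_string, offset=33):
    """Mean Phred score of a FASTQ quality string (Phred+33 encoding assumed)."""
    return sum(ord(c) - offset for c in quality_string) / len(quality_string)

def passes_filters(sequence, quality_string, min_qual=20, min_len=35):
    """True if the read survives the average-quality and length filters."""
    if len(sequence) < min_len:
        return False
    return mean_phred(quality_string) >= min_qual
```

In the actual pipeline, PCR duplicates and reads aligning to blacklisted genome regions were also removed, steps that require alignment information and are not modelled in this snippet.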
Procedia PDF Downloads 149
80 Urban Waste Water Governance in South Africa: A Case Study of Stellenbosch
Authors: R. Malisa, E. Schwella, K. I. Theletsane
Abstract:
Due to climate change, population growth and rapid urbanization, the demand for water in South Africa is inevitably surpassing supply. To address similar challenges globally, there has been a paradigm shift in urban waste water management from a conventional “government” approach to a “governance” paradigm. From the governance paradigm, the principle of Integrated Urban Water Management (IUWM) emerged. This principle emphasizes efficient urban waste water treatment and the production of high-quality recyclable effluent, thereby mimicking natural water systems in their efficient recycling of water and averting the depletion of natural water resources. The objective of this study was to investigate the drivers of shifting the current urban waste water management approach from the “government” paradigm towards “governance”. The study was conducted through the Interactive Management soft systems research methodology, which follows a qualitative research design. A case study methodology was employed, guided by a realism research philosophy. The qualitative data gathered were analyzed through interpretative structural modelling using the Concept Star for Professionals Decision-Making tools (CSPDM) version 3.64. The constructed model deduced that the main drivers in shifting the Stellenbosch municipal urban waste water management towards IUWM “governance” principles are mainly social elements, characterized by overambitious public expectations of municipal water service delivery, misinterpretation of the constitutional right of access to adequate clean water and sanitation, and differing community perceptions of recycled water. Inadequate public participation also emerged as a strong driver. However, disruptive events such as drought may play a positive role in raising awareness of the value of water, resulting in a shift in perceptions of recycled water.
Once the social elements are addressed, the alignment of governance and administration elements towards IUWM is achievable. Hence, the point of departure for the desired paradigm shift is a change in the perceptions and behaviors of water service authorities and serviced communities towards shifting urban waste water management from the “government” to the “governance” paradigm.
Keywords: integrated urban water management, urban water system, wastewater governance, wastewater treatment works
Procedia PDF Downloads 159
79 Progressive Damage Analysis of Mechanically Connected Composites
Authors: Şeyma Saliha Fidan, Ozgur Serin, Ata Mugan
Abstract:
When performing verification analyses of the static and dynamic loads that composite structures used in aviation are exposed to, it is necessary to obtain the bearing strength limit value for mechanically connected composite structures. For this purpose, various tests are carried out in accordance with aviation standards. Many companies around the world perform these tests to aviation standards, but the test costs are very high. In addition, because of the need to produce coupons, the high cost of coupon materials, and the long test times, it is desirable to simulate these tests on the computer. To this end, various test coupons were produced using the reinforcement materials and alignment angles of the composite radomes integrated into the aircraft. Glass-fiber-reinforced and quartz prepregs were used in the production of the coupons. The tests performed according to the American Society for Testing and Materials (ASTM) D5961 Procedure C standard were simulated on the computer. The analysis model was created in three dimensions in order to model the bolt-hole contact surface realistically and obtain an accurate bearing strength value. The finite element analysis was carried out with ANSYS. Since a physical fracture cannot occur in analyses carried out in a virtual environment, a hypothetical failure is realized by reducing the material properties. The material property reduction coefficient was set to 10%, which is reported in the literature to give the most realistic approach. This method, called progressive failure analysis, comprises various theories. Because the Hashin theory did not match our experimental results, the Puck progressive damage method was used in all coupon analyses.
When the experimental and numerical results are compared, the initial damage points, the resulting force drops, the maximum damage load values, and the bearing strength values are very close. Furthermore, low error rates and similar damage patterns were obtained in both test and simulation models. In addition, the effects of various parameters on the bearing strength of the composite structure were investigated, such as pre-stress, use of a bushing, the ratio of the distance between the bolt hole center and the plate edge to the hole diameter (E/D), the ratio of plate width to hole diameter (W/D), and hot-wet environmental conditions.
Keywords: puck, finite element, bolted joint, composite
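The property-degradation scheme described in this abstract can be caricatured in a few lines of Python. This is a toy sketch only: a simple maximum-stress check stands in for the Puck criterion, a one-dimensional parallel-ply model stands in for the 3D ANSYS model, and the 10% coefficient is read as the retained fraction of the intact modulus (one common interpretation); none of this reproduces the authors' analysis.

```python
# Toy sketch of progressive failure analysis by stiffness degradation.
# Plies share one applied strain; when a ply's stress exceeds its strength,
# its modulus is knocked down to 10% of the intact value and the load
# redistribution is re-checked until no further plies fail.

KNOCKDOWN = 0.10  # degraded modulus = KNOCKDOWN * intact modulus (assumed reading)

def progressive_failure(moduli, strengths, strain):
    """Degrade failed plies iteratively; return final moduli and total stress."""
    moduli = list(moduli)
    intact = list(moduli)
    changed = True
    while changed:
        changed = False
        for i, (e, s) in enumerate(zip(moduli, strengths)):
            stress = e * strain
            if stress > s and moduli[i] == intact[i]:  # each ply fails at most once
                moduli[i] = KNOCKDOWN * intact[i]
                changed = True
    total_stress = sum(e * strain for e in moduli)
    return moduli, total_stress
```

In the study itself, the failed-ply check is the Puck criterion evaluated on the 3D finite element stress field, and the force drops observed in the tensile tests correspond to the load redistribution modelled here in miniature.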
Procedia PDF Downloads 103