Search results for: original metaphor

266 Creativity and Innovation in Postgraduate Supervision

Authors: Rajendra Chetty

Abstract:

The paper aims to address two aspects of postgraduate studies: interdisciplinary research and creative models of supervision. Interdisciplinary research can be viewed as a key imperative to solve complex problems. While excellent research requires a context of disciplinary strength, the cutting edge is often found at the intersection between disciplines. Interdisciplinary research foregrounds a team approach and information, methodologies, designs, and theories from different disciplines are integrated to advance fundamental understanding or to solve problems whose solutions are beyond the scope of a single discipline. Our aim should also be to generate research that transcends the original disciplines i.e. transdisciplinary research. Complexity is characteristic of the knowledge economy, hence, postgraduate research and engaged scholarship should be viewed by universities as primary vehicles through which knowledge can be generated to have a meaningful impact on society. There are far too many ‘ordinary’ studies that fall into the realm of credentialism and certification as opposed to significant studies that generate new knowledge and provide a trajectory for further academic discourse. Secondly, the paper will look at models of supervision that are different to the dominant ‘apprentice’ or individual approach. A reflective practitioner approach would be used to discuss a range of supervision models that resonate well with the principles of interdisciplinarity, growth in the postgraduate sector and a commitment to engaged scholarship. The global demand for postgraduate education has resulted in increased intake and new demands to limited supervision capacity at institutions. Team supervision lodged within large-scale research projects, working with a cohort of students within a research theme, the journal article route of doctoral studies and the professional PhD are some of the models that provide an alternative to the traditional approach. International cooperation should be encouraged in the production of high-impact research and institutions should be committed to stimulating international linkages which would result in co-supervision and mobility of postgraduate students and global significance of postgraduate research. International linkages are also valuable in increasing the capacity for supervision at new and developing universities. Innovative co-supervision and joint-degree options with global partners should be explored within strategic planning for innovative postgraduate programmes. Co-supervision of PhD students is probably the strongest driver (besides funding) for collaborative research as it provides the glue of shared interest, advantage and commitment between supervisors. The students’ field serves and informs the co-supervisors own research agendas and helps to shape over-arching research themes through shared research findings.

Keywords: interdisciplinarity, internationalisation, postgraduate, supervision

Procedia PDF Downloads 238
265 The Correlation between Emotional Intelligence and Locus of Control: Empirical Study on Lithuanian Youth

Authors: Dalia Antiniene, Rosita Lekaviciene

Abstract:

This questionnaire-based study is designed to reveal a connection between emotional intelligence (EI) and locus of control (LC) within the population of Lithuanian youth. In the context of emotional problems, the locus of control reflects how one estimates the causes of his/her emotions: internals (internal locus of control) associate their emotions with their manner of thinking, whereas externals (external locus of control) consider emotions to be evoked by external circumstances. However, there is little empirical data about this connection, and the available results are often contradictory. In the study, 1430 young people, aged 17 to 27, from various regions of Lithuania were surveyed. The subjects were selected by quota sampling, maintaining the natural proportions of the general Lithuanian youth population. To assess emotional intelligence, the EI-DARL test (a self-report questionnaire consisting of 75 items) was implemented. The emotional intelligence test, created by applying exploratory factor analysis, reveals four main dimensions of EI: understanding of one’s own emotions, regulation of one’s own emotions, understanding of others’ emotions, and regulation of others’ emotions (subscale reliability coefficients fluctuate between 0.84 and 0.91). An original 16-item internality/externality scale was used to examine the locus of control (internal consistency of the Externality subscale: 0.75; Internality subscale: 0.65). The study determined that the youth understand and regulate other people’s emotions better than their own. Using the K-means cluster analysis method, it was established that there are three groups of subjects according to their EI level: people with low, medium, and high EI. After comparing the means of subjects’ favorability of statements on the Internality/Externality scale, a predominance of internal locus of control in the young population was established. The multiple regression models have shown that a rather strong, statistically significant correlation exists between total EI, the EI subscales, and LC. People who tend to attribute responsibility for the outcomes of their actions to their own abilities and efforts have higher EI; conversely, the tendency to attribute responsibility to external forces is related to lower EI. While pursuing their goals, young people with high internality have a predisposition to analyze perceived emotions and, therefore, gain emotional experience: they learn to control their natural reactions and to act adequately in the situation at hand. Thus the study reveals that a person’s locus of control and emotional intelligence are related phenomena and allows us to conclude that a person’s internality/externality is a reliable predictor of total EI and its components.
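
A minimal sketch of the kind of analysis described above (K-means grouping by EI level, then a regression of total EI on internality/externality scores) is given below; the file name and column names are hypothetical placeholders rather than the study's actual variables.

```python
# Illustrative sketch only: hypothetical column names stand in for the EI-DARL
# subscales and the internality/externality scores reported in the abstract.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

df = pd.read_csv("ei_locus_survey.csv")  # hypothetical survey export
ei_subscales = ["own_understanding", "own_regulation",
                "others_understanding", "others_regulation"]

# Three EI-level groups (low / medium / high), as in the reported K-means analysis
df["ei_group"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(df[ei_subscales])

# Regress total EI on internality and externality scores
X = df[["internality", "externality"]]
y = df[ei_subscales].sum(axis=1)
model = LinearRegression().fit(X, y)
print("R^2:", model.score(X, y), "coefficients:", model.coef_)
```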

Keywords: emotional intelligence, externality, internality, locus of control

Procedia PDF Downloads 224
264 The Quantum Theory of Music and Human Languages

Authors: Mballa Abanda Luc Aurelien Serge, Henda Gnakate Biba, Kuate Guemo Romaric, Akono Rufine Nicole, Zabotom Yaya Fadel Biba, Petfiang Sidonie, Bella Suzane Jenifer

Abstract:

The main hypotheses proposed around the definition of the syllable and of music, and of the common origin of music and language, should lead the reader to reflect on the cross-cutting questions raised by the debate on the notion of universals in linguistics and musicology. These are objects of controversy, and therein lies the interest of the work: the debate raises questions that are at the heart of theories on language. It is an inventive, original, and innovative research thesis. It is a contribution to the theoretical, musicological, ethnomusicological, and linguistic conceptualization of languages, giving rise to the practice of interlocution between the social and cognitive sciences, the activities of artistic creation, and the question of modeling in the human sciences: mathematics, computer science, automated translation, and artificial intelligence. When you apply this theory to any text of a folk song in a tonal language, you not only piece together the exact melody, rhythm, and harmonies of that song, as if you knew it in advance, but also the exact speech of that language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as well as one of the greatest cultural equations related to the composition and creation of tonal, polytonal, and random music. To confirm the theorization experimentally, I designed a semi-digital, semi-analog application that translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, I use music reading and writing software that allows me to collect the data extracted from my mother tongue, which is already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). The translation is done from writing to writing, from writing to speech, and from writing to music. Mode of operation: you type a text on your computer, a structured song (chorus-verse), and you ask the machine for a melody of blues, jazz, world music, variety, etc. The software runs, giving you the option to choose harmonies, and then you select your melody.

Keywords: language, music, sciences, quantum entanglement

Procedia PDF Downloads 78
263 Exploratory Study of Individual User Characteristics That Predict Attraction to Computer-Mediated Social Support Platforms and Mental Health Apps

Authors: Rachel Cherner

Abstract:

Introduction: The current study investigates several user characteristics that may predict the adoption of digital mental health supports. The extent to which individual characteristics predict preferences for functional elements of computer-mediated social support (CMSS) platforms and mental health (MH) apps is relatively unstudied. Aims: The present study seeks to illuminate the relationship between broad user characteristics and perceived attraction to CMSS platforms and MH apps. Methods: Participants (n=353) were recruited using convenience sampling methods (i.e., digital flyers, email distribution, and online survey forums). The sample was 68% male, and 32% female, with a mean age of 29. Participant racial and ethnic breakdown was 75% White, 7%, 5% Asian, and 5% Black or African American. Participants were asked to complete a 25-minute self-report questionnaire that included empirically validated measures assessing a battery of characteristics (i.e., subjective levels of anxiety/depression via PHQ-9 (Patient Health Questionnaire 9-item) and GAD-7 (Generalized Anxiety Disorder 7-item); attachment style via MAQ (Measure of Attachment Qualities); personality types via TIPI (The 10-Item Personality Inventory); growth mindset and mental health-seeking attitudes via GM (Growth Mindset Scale) and MHSAS (Mental Help Seeking Attitudes Scale)) and subsequent attitudes toward CMSS platforms and MH apps. Results: A stepwise linear regression was used to test if user characteristics significantly predicted attitudes towards key features of CMSS platforms and MH apps. The overall regression was statistically significant (R² =.20, F(1,344)=14.49, p<.000). Conclusion: This original study examines the clinical and sociocultural factors influencing decisions to use CMSS platforms and MH apps. Findings provide valuable insight for increasing adoption and engagement with digital mental health support. Fostering a growth mindset may be a method of increasing participant/patient engagement. In addition, CMSS platforms and MH apps may empower under-resourced and minority groups to gain basic access to mental health support. We do not assume this final model contains the best predictors of use; this is merely a preliminary step toward understanding the psychology and attitudes of CMSS platform/MH app users.
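
A minimal sketch of a stepwise (forward-selection) regression of this kind is shown below; the predictor and outcome column names are hypothetical placeholders, not the study's instruments or data.

```python
# Illustrative sketch only: forward feature selection approximates a stepwise
# regression of app attitudes on user characteristics (hypothetical columns).
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import SequentialFeatureSelector

df = pd.read_csv("user_survey.csv")  # hypothetical data export
predictors = ["phq9", "gad7", "attachment_anxiety", "growth_mindset", "mhsas"]
X, y = df[predictors], df["app_attitude"]

sfs = SequentialFeatureSelector(LinearRegression(), n_features_to_select=2,
                                direction="forward").fit(X, y)
selected = [p for p, keep in zip(predictors, sfs.get_support()) if keep]

final = LinearRegression().fit(X[selected], y)
print("Selected predictors:", selected, "R^2:", final.score(X[selected], y))
```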

Keywords: computer-mediated social support platforms, digital mental health, growth mindset, health-seeking attitudes, mental health apps, user characteristics

Procedia PDF Downloads 92
262 Towards Creative Movie Title Generation Using Deep Neural Models

Authors: Simon Espigolé, Igor Shalyminov, Helen Hastie

Abstract:

Deep machine learning techniques including deep neural networks (DNN) have been used to model language and dialogue for conversational agents to perform tasks, such as giving technical support and also for general chit-chat. They have been shown to be capable of generating long, diverse and coherent sentences in end-to-end dialogue systems and natural language generation. However, these systems tend to imitate the training data and will only generate the concepts and language within the scope of what they have been trained on. This work explores how deep neural networks can be used in a task that would normally require human creativity, whereby the human would read the movie description and/or watch the movie and come up with a compelling, interesting movie title. This task differs from simple summarization in that the movie title may not necessarily be derivable from the content or semantics of the movie description. Here, we train a type of DNN called a sequence-to-sequence model (seq2seq) that takes as input a short textual movie description and some information on e.g. genre of the movie. It then learns to output a movie title. The idea is that the DNN will learn certain techniques and approaches that the human movie titler may deploy that may not be immediately obvious to the human-eye. To give an example of a generated movie title, for the movie synopsis: ‘A hitman concludes his legacy with one more job, only to discover he may be the one getting hit.’; the original, true title is ‘The Driver’ and the one generated by the model is ‘The Masquerade’. A human evaluation was conducted where the DNN output was compared to the true human-generated title, as well as a number of baselines, on three 5-point Likert scales: ‘creativity’, ‘naturalness’ and ‘suitability’. Subjects were also asked which of the two systems they preferred. The scores of the DNN model were comparable to the scores of the human-generated movie title, with means m=3.11, m=3.12, respectively. There is room for improvement in these models as they were rated significantly less ‘natural’ and ‘suitable’ when compared to the human title. In addition, the human-generated title was preferred overall 58% of the time when pitted against the DNN model. These results, however, are encouraging given the comparison with a highly-considered, well-crafted human-generated movie title. Movie titles go through a rigorous process of assessment by experts and focus groups, who have watched the movie. This process is in place due to the large amount of money at stake and the importance of creating an effective title that captures the audiences’ attention. Our work shows progress towards automating this process, which in turn may lead to a better understanding of creativity itself.
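
A minimal sequence-to-sequence sketch of the kind of model described (synopsis tokens in, title tokens out) follows; the vocabulary size, dimensions, and toy random data are assumptions, not the authors' configuration.

```python
# Illustrative seq2seq sketch only: an LSTM encoder-decoder trained with teacher
# forcing on toy data standing in for (synopsis, title) token pairs.
import numpy as np
from tensorflow.keras import layers, Model

VOCAB, EMB, HID, SRC_LEN, TGT_LEN = 8000, 128, 256, 120, 8

enc_in = layers.Input(shape=(SRC_LEN,))
enc_emb = layers.Embedding(VOCAB, EMB, mask_zero=True)(enc_in)
_, h, c = layers.LSTM(HID, return_state=True)(enc_emb)        # encoder summary state

dec_in = layers.Input(shape=(TGT_LEN - 1,))
dec_emb = layers.Embedding(VOCAB, EMB, mask_zero=True)(dec_in)
dec_seq = layers.LSTM(HID, return_sequences=True)(dec_emb, initial_state=[h, c])
probs = layers.Dense(VOCAB, activation="softmax")(dec_seq)    # next-token distribution

model = Model([enc_in, dec_in], probs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

src = np.random.randint(1, VOCAB, size=(64, SRC_LEN))         # toy synopsis tokens
tgt = np.random.randint(1, VOCAB, size=(64, TGT_LEN))         # toy title tokens
model.fit([src, tgt[:, :-1]], tgt[:, 1:, None], epochs=1)     # teacher forcing
```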

Keywords: creativity, deep machine learning, natural language generation, movies

Procedia PDF Downloads 327
261 Developing Curricula for Signaling and Communication Course at Malaysia Railway Academy (MyRA) through Industrial Collaboration Program

Authors: Mohd Fairus Humar, Ibrahim Sulaiman, Pedro Cruz, Hasry Harun

Abstract:

This paper presents the proposed knowledge transfer program on railway signaling and communication by the Original Equipment Manufacturer (OEM) Thales Portugal. The fundamental issue is that there is no rail-related course offered by local universities and colleges in Malaysia that students could pursue as a career path. Currently, dedicated training related to rail technology is provided by in-house training academies established by the respective rail operators, such as the Malaysia Railway Academy (MyRA) and the Rapid Rail Training Centre. In this regard, the content of training and the facilities need to be strengthened to keep up to date with the dynamic evolution of rail technology. This is because rail products have evolved to be more sophisticated and embedded with high-technology components that no longer exist in mechanical form alone but are combined with electronics, information technology, and others. This demands a workforce imbued with knowledge, multiple skills, and the competency to deal with specialized technical areas. Such talent is needed to support sustainability in Southeast Asia. Keeping the above factors in mind, an Industrial Collaboration Program (ICP) was carried out to transfer knowledge on the curricula of railway signaling and communication to selected railway operators and a tertiary educational institution in Malaysia. To achieve this aim, a two-year partnership was formed between the Technical Depository Agency (TDA), Thales Portugal, and MyRA, with three main stages of program implementation comprising: i) training on basic railway signaling and communication for one month with Thales in Malaysia; ii) training on advanced railway signaling and communication for four months with Thales in Portugal; and iii) a series of workshops. Two workshops were convened to develop and harmonize the curricula of the railway signaling and communication course, followed by one training session on the installation of railway signaling equipment and the Controlled Train Centre (CTC) system from Thales Portugal. With active involvement from the Technical Depository Agency (TDA), railway operators, universities, and colleges in planning, executing, monitoring, control, and closure, the program module of the railway signaling and communication course, together with a laboratory of railway signaling field equipment and a CTC simulator, was developed. Through this program, contributions from various parties help to build committed communities that engage with important issues in relation to railway signaling and communication towards creating a sustainable future.

Keywords: knowledge transfer program, railway signaling and communication, curricula, module and teaching aid simulator

Procedia PDF Downloads 193
260 Private Coded Computation of Matrix Multiplication

Authors: Malihe Aliasgari, Yousef Nejatbakhsh

Abstract:

The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data is dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers refer to a few slow or delay-prone processors that can bottleneck the entire computation because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the tradeoff between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, depending on the minimum distance of the code, have completed their operations. Matrix-matrix multiplication over practically large datasets faces computational and memory-related difficulties, which is why such operations are carried out on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y. This operation is a fundamental building block of many science and engineering fields such as machine learning, image and signal processing, wireless communication, and optimization. Both non-secure and secure matrix multiplication are studied. We consider the setup in which the identity of the matrix of interest should be kept private from the workers, and we obtain the recovery threshold of the colluding model, that is, the number of workers that need to complete their task before the master server can recover the product W. We also study the problem of secure and private distributed matrix multiplication W = XY in which the matrix X is confidential, while matrix Y is selected in a private manner from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
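
The straggler-coding idea described above can be illustrated with a small numerical sketch; this is a generic MDS-coded scheme for illustration, not the paper's PSGPD construction.

```python
# Illustrative sketch only: MDS-coded matrix multiplication that tolerates stragglers.
# X is split into k row blocks and encoded into n coded blocks; the master recovers
# X @ Y from any k of the n workers' results.
import numpy as np

k, n = 3, 5                                   # data blocks, workers (tolerates n - k stragglers)
X, Y = np.random.randn(6, 4), np.random.randn(4, 2)

blocks = np.split(X, k)                       # k row blocks of X
G = np.vander(np.arange(1.0, n + 1), k, increasing=True)   # encoding (generator) matrix

# Worker i holds the coded block sum_j G[i, j] * blocks[j] and multiplies it by Y
coded_products = [sum(G[i, j] * blocks[j] for j in range(k)) @ Y for i in range(n)]

# Suppose only workers 0, 2 and 4 finish: decode using the corresponding rows of G
survivors = [0, 2, 4]
decoded = np.tensordot(np.linalg.inv(G[survivors]),
                       np.stack([coded_products[i] for i in survivors]), axes=1)
assert np.allclose(np.concatenate(decoded), X @ Y)
```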

Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers

Procedia PDF Downloads 125
259 Synthetic Classicism: A Machine Learning Approach to the Recognition and Design of Circular Pavilions

Authors: Federico Garrido, Mostafa El Hayani, Ahmed Shams

Abstract:

The exploration of the potential of artificial intelligence (AI) in architecture is still embryonic; however, its latent capacity to change design disciplines is significant. 'Synthetic Classicism' is a research project that questions the underlying aspects of classically organized architecture, not just in aesthetic terms but also from a geometrical and morphological point of view, intending to generate new architectural information using historical examples as source material. The main aim of this paper is to explore the uses of artificial intelligence and machine learning algorithms in architectural design while creating a coherent narrative to be contained within a design process. The purpose is twofold: on one hand, to develop and train machine learning algorithms to produce architectural information of small pavilions and, on the other, to synthesize new information from previous architectural drawings. These algorithms are intended to 'interpret' graphical information from each pavilion and then generate new information from it. The procedure, once these algorithms are trained, is the following: starting from a line profile, a synthetic 'front view' of a pavilion is generated; using it as source material, an isometric view is created from it; and finally, a top view is produced. Thanks to GAN algorithms, it is also possible to generate front and isometric views without any graphical input. The final intention of the research is to produce isometric views out of historical information, such as the pavilions of Sebastiano Serlio, James Gibbs, or John Soane. The idea is to create and interpret new information not just in terms of historical reconstruction but also to explore AI as a novel tool in the narrative of a creative design process. This research also challenges the idea that algorithmic design is associated only with efficiency or fitness, while embracing the possibility of a creative collaboration between artificial intelligence and a human designer. Hence the double nature of this research, both analytical and creative: first synthesizing images based on a given dataset and then generating new architectural information from historical references. We find that the possibility of creatively understanding and manipulating historic (and synthetic) information will be a key feature in future innovative design processes. Finally, the main question that we propose is whether an AI could be used not just to create an original and innovative group of simple buildings but also to explore the possibility of fostering a novel architectural sensibility grounded in the specificities of the architectural dataset, whether historic, human-made, or synthetic.
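
A highly simplified sketch of the paired view-to-view translation step is shown below; the actual project trains GANs, whereas this sketch uses only an L1-style reconstruction loss, and the image size and toy data are assumptions.

```python
# Illustrative sketch only: a tiny encoder-decoder generator mapping a front view
# of a pavilion to an isometric view (the real work trains GANs on drawing pairs).
import numpy as np
from tensorflow.keras import layers, Model

inp = layers.Input(shape=(128, 128, 1))                             # front view (grayscale)
x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inp)
x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
out = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(x)  # isometric view

generator = Model(inp, out)
generator.compile(optimizer="adam", loss="mae")                     # L1 reconstruction only

fronts = np.random.rand(16, 128, 128, 1)   # toy stand-ins for scanned front views
isos = np.random.rand(16, 128, 128, 1)     # toy stand-ins for paired isometric views
generator.fit(fronts, isos, epochs=1, batch_size=4)
```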

Keywords: architecture, central pavilions, classicism, machine learning

Procedia PDF Downloads 141
258 Investigating the Influence of Activation Functions on Image Classification Accuracy via Deep Convolutional Neural Network

Authors: Gulfam Haider, Sana Danish

Abstract:

Convolutional Neural Networks (CNNs) have emerged as powerful tools for image classification, and the choice of optimizers profoundly affects their performance. The study of optimizers and their adaptations remains a topic of significant importance in machine learning research. While numerous studies have explored and advocated for various optimizers, the efficacy of these optimization techniques is still subject to scrutiny. This work aims to address the challenges surrounding the effectiveness of optimizers by conducting a comprehensive analysis and evaluation. The primary focus of this investigation lies in examining the performance of different optimizers when employed in conjunction with the popular activation function, Rectified Linear Unit (ReLU). By incorporating ReLU, known for its favorable properties in prior research, the aim is to bolster the effectiveness of the optimizers under scrutiny. Specifically, we evaluate the adjustment of these optimizers with both the original Softmax activation function and the modified ReLU activation function, carefully assessing their impact on overall performance. To achieve this, a series of experiments are conducted using a well-established benchmark dataset for image classification tasks, namely the Canadian Institute for Advanced Research dataset (CIFAR-10). The selected optimizers for investigation encompass a range of prominent algorithms, including Adam, Root Mean Squared Propagation (RMSprop), Adaptive Learning Rate Method (Adadelta), Adaptive Gradient Algorithm (Adagrad), and Stochastic Gradient Descent (SGD). The performance analysis encompasses a comprehensive evaluation of the classification accuracy, convergence speed, and robustness of the CNN models trained with each optimizer. Through rigorous experimentation and meticulous assessment, we discern the strengths and weaknesses of the different optimization techniques, providing valuable insights into their suitability for image classification tasks. By conducting this in-depth study, we contribute to the existing body of knowledge surrounding optimizers in CNNs, shedding light on their performance characteristics for image classification. The findings gleaned from this research serve to guide researchers and practitioners in making informed decisions when selecting optimizers and activation functions, thus advancing the state-of-the-art in the field of image classification with convolutional neural networks.
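
A minimal sketch of this experimental setup (the same small CNN, CIFAR-10, one run per optimizer) could look like the following; the architecture and epoch count are illustrative assumptions rather than the authors' exact configuration.

```python
# Illustrative sketch only: train the same ReLU-based CNN on CIFAR-10 with each
# candidate optimizer and compare test accuracy.
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

def build_cnn():
    return models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(10, activation="softmax"),
    ])

candidates = {"SGD": optimizers.SGD(), "Adam": optimizers.Adam(),
              "RMSprop": optimizers.RMSprop(), "Adagrad": optimizers.Adagrad(),
              "Adadelta": optimizers.Adadelta()}
for name, opt in candidates.items():
    model = build_cnn()
    model.compile(optimizer=opt, loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=5, batch_size=128, verbose=0)
    print(name, "test accuracy:", model.evaluate(x_test, y_test, verbose=0)[1])
```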

Keywords: deep neural network, optimizers, RMSprop, ReLU, stochastic gradient descent

Procedia PDF Downloads 127
257 Functional Neurocognitive Imaging (fNCI): A Diagnostic Tool for Assessing Concussion Neuromarker Abnormalities and Treating Post-Concussion Syndrome in Mild Traumatic Brain Injury Patients

Authors: Parker Murray, Marci Johnson, Tyson S. Burnham, Alina K. Fong, Mark D. Allen, Bruce McIff

Abstract:

Purpose: Pathological dysregulation of Neurovascular Coupling (NVC) caused by mild traumatic brain injury (mTBI) is the predominant source of chronic post-concussion syndrome (PCS) symptomology. fNCI has the ability to localize dysregulation in NVC by measuring blood-oxygen-level-dependent (BOLD) signaling during the performance of fMRI-adapted neuropsychological evaluations. With fNCI, 57 brain areas consistently affected by concussion were identified as PCS neural markers, which were validated on large samples of concussion patients and healthy controls. These neuromarkers provide the basis for a computation of PCS severity which is referred to as the Severity Index Score (SIS). The SIS has proven valuable in making pre-treatment decisions, monitoring treatment efficiency, and assessing long-term stability of outcomes. Methods and Materials: After being scanned while performing various cognitive tasks, 476 concussed patients received an SIS score based on the neural dysregulation of the 57 previously identified brain regions. These scans provide an objective measurement of attentional, subcortical, visual processing, language processing, and executive functioning abilities, which were used as biomarkers for post-concussive neural dysregulation. Initial SIS scores were used to develop individualized therapy incorporating cognitive, occupational, and neuromuscular modalities. These scores were also used to establish pre-treatment benchmarks and measure post-treatment improvement. Results: Changes in SIS were calculated in percent change from pre- to post-treatment. Patients showed a mean improvement of 76.5 percent (σ= 23.3), and 75.7 percent of patients showed at least 60 percent improvement. Longitudinal reassessment of 24 of the patients, measured an average of 7.6 months post-treatment, shows that SIS improvement is maintained and improved, with an average of 90.6 percent improvement from their original scan. Conclusions: fNCI provides a reliable measurement of NVC allowing for identification of concussion pathology. Additionally, fNCI derived SIS scores direct tailored therapy to restore NVC, subsequently resolving chronic PCS resulting from mTBI.
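
The percent-change figures reported above correspond to a simple calculation, written out here for clarity; the assumption that a lower SIS indicates less neural dysregulation is ours, not a statement from the abstract.

```latex
\text{Improvement}(\%) \;=\; \frac{\mathrm{SIS}_{\text{pre}} - \mathrm{SIS}_{\text{post}}}{\mathrm{SIS}_{\text{pre}}} \times 100
```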

Keywords: concussion, functional magnetic resonance imaging (fMRI), neurovascular coupling (NVC), post-concussion syndrome (PCS)

Procedia PDF Downloads 359
256 Creep Analysis and Rupture Evaluation of High Temperature Materials

Authors: Yuexi Xiong, Jingwu He

Abstract:

The structural components in an energy facility such as steam turbine machines are operated under high stress and elevated temperature in an endured time period and thus the creep deformation and creep rupture failure are important issues that need to be addressed in the design of such components. There are numerous creep models being used for creep analysis that have both advantages and disadvantages in terms of accuracy and efficiency. The Isochronous Creep Analysis is one of the simplified approaches in which a full-time dependent creep analysis is avoided and instead an elastic-plastic analysis is conducted at each time point. This approach has been established based on the rupture dependent creep equations using the well-known Larson-Miller parameter. In this paper, some fundamental aspects of creep deformation and the rupture dependent creep models are reviewed and the analysis procedures using isochronous creep curves are discussed. Four rupture failure criteria are examined from creep fundamental perspectives including criteria of Stress Damage, Strain Damage, Strain Rate Damage, and Strain Capability. The accuracy of these criteria in predicting creep life is discussed and applications of the creep analysis procedures and failure predictions of simple models will be presented. In addition, a new failure criterion is proposed to improve the accuracy and effectiveness of the existing criteria. Comparisons are made between the existing criteria and the new one using several examples materials. Both strain increase and stress relaxation form a full picture of the creep behaviour of a material under high temperature in an endured time period. It is important to bear this in mind when dealing with creep problems. Accordingly there are two sets of rupture dependent creep equations. While the rupture strength vs LMP equation shows how the rupture time depends on the stress level under load controlled condition, the strain rate vs rupture time equation reflects how the rupture time behaves under strain-controlled condition. Among the four existing failure criteria for rupture life predictions, the Stress Damage and Strain Damage Criteria provide the most conservative and non-conservative predictions, respectively. The Strain Rate and Strain Capability Criteria provide predictions in between that are believed to be more accurate because the strain rate and strain capability are more determined quantities than stress to reflect the creep rupture behaviour. A modified Strain Capability Criterion is proposed making use of the two sets of creep equations and therefore is considered to be more accurate than the original Strain Capability Criterion.
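
For reference, the Larson-Miller parameter mentioned above is conventionally written as

```latex
\mathrm{LMP} \;=\; T\left(C + \log_{10} t_r\right)
```

where T is the absolute temperature, t_r is the time to rupture in hours, and C is a material constant (often taken near 20 for steels).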

Keywords: creep analysis, high temperature materials, rupture evaluation, steam turbine machines

Procedia PDF Downloads 291
255 Application of Nuclear Magnetic Resonance (1H-NMR) in the Analysis of Catalytic Aquathermolysis: Colombian Heavy Oil Case

Authors: Paola Leon, Hugo Garcia, Adan Leon, Samuel Munoz

Abstract:

Enhanced oil recovery by steam injection was long considered a process that only generated physical recovery mechanisms. However, there is evidence of the occurrence of a series of chemical reactions, called aquathermolysis, which generate hydrogen sulfide, carbon dioxide, methane, and lower-molecular-weight hydrocarbons. These reactions can be favored by the addition of a catalyst during steam injection; in this way, it is possible to achieve in-situ upgrading of the original oil through increased production of molecules of lower molecular weight. This additional effect could increase the oil recovery factor and reduce costs in the transport and refining stages. Therefore, this research has focused on the experimental evaluation of catalytic aquathermolysis on a Colombian heavy oil of 12.8°API. The effects of three different catalysts, reaction time, and temperature were evaluated in a batch microreactor. The changes in the Colombian heavy oil were quantified through nuclear magnetic resonance (1H-NMR). The interpretation of relaxation times and absorption intensities allowed identification of the distribution of functional groups in the base oil and the upgraded oils. Additionally, the average number of aliphatic carbons in alkyl chains, the number of substituted rings, and the aromaticity factor were established as average structural parameters in order to simplify the compositional analysis of the samples. The first experimental stage proved that each catalyst develops a different reaction mechanism. The aromaticity factor shows an increasing order among the salts used: Mo > Fe > Ni. However, the upgraded oil obtained with iron naphthenate tends to form a higher content of mono-aromatic and a lower content of poly-aromatic compounds. On the other hand, the results obtained from the second phase of experiments suggest that the upgraded oils have a smaller difference in the length of alkyl chains in the range of 240° to 270°C. This parameter has lower values at 300°C, which indicates that alkylation or cleavage reactions of alkyl chains govern at higher reaction temperatures. The presence of condensation reactions is supported by the behavior of the aromaticity factor and the production of bridge carbons between aromatic rings (RCH₂). Finally, it is observed that there is a greater dispersion in the aliphatic hydrogens, which indicates that the alkyl chains have a greater reactivity compared to the aromatic structures.
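
For clarity, the aromaticity factor used as a structural parameter here is, in its basic definition, the fraction of carbon in aromatic structures (how the paper estimates it from ¹H intensities, e.g. via Brown-Ladner-type relations, is an assumption on our part):

```latex
f_a \;=\; \frac{C_{\text{aromatic}}}{C_{\text{aromatic}} + C_{\text{aliphatic}}}
```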

Keywords: catalyst, upgrading, aquathermolysis, steam

Procedia PDF Downloads 111
254 Exploring the Link between Hoarding Disorder and Trauma: A Scoping Review

Authors: Murray Anderson, Galina Freed, Karli Jahn

Abstract:

Trauma is increasingly recognized as an important construct that has health implications for those who struggle with various mental health issues. Many individuals who meet the criteria for a diagnosis of hoarding disorder (HD) have experienced some form of trauma. Further, some of the therapeutic interventions for those with HD can further perpetuate or magnify the experience of trauma. Therefore, the aim of this scoping review is to identify and document the nature and extent of research evidence relating trauma to HD. This review was guided by the questions, 'How can our understanding of the trauma cycle help us to better appreciate the experiences of individuals who hoard, and how will a trauma-informed lens inform the interventions for hoarding disorder?' A comprehensive literature search was performed to identify original studies that contained the words 'hoarding' and 'trauma.' PsycINFO, EBSCOhost, CINAHL, and PubMed were searched between January 2005 and April 2021. Articles were screened by three reviewers. Data extracted included publication date, demographics, study design, type of analysis, and noted connections between hoarding and trauma. Of the 329 articles, all duplicates, articles on the hoarding of animals, articles not in English, and those without full-text availability were removed. Five categories were found in the remaining 45 articles, including (a) traumatic and stressful life events; (b) the link between posttraumatic stress disorder, trauma, and hoarding; (c) the relationships between different comorbidities, trauma, and hoarding; (d) the lack of early emotional expression and other forms of parental deprivation; and (e) the role of attachment. Lastly, the literature explains how the links between hoarding and trauma are difficult to study due to the highly stigmatized identities within this population. The review provided strong support for the connections between the experience of trauma and HD. What is missing from the literature is the use of a trauma-informed lens to better account for the ways in which hoarding disorder is understood. Other missing pieces in the literature are the potential uses of a trauma-informed lens to enhance the therapeutic process, to understand and reduce treatment attrition, and to improve treatment outcomes. The application of a trauma-informed lens could improve our understanding of effective interactions among clients, families, and communities and improve education around hoarding-related matters. Exploring the connections between trauma and HD can improve therapeutic delivery and destigmatize the experience of dealing with clutter and hoarding concerns. This awareness can also provide health care professionals with both the language and the skills to liberate them from a reductionist view of HD.

Keywords: hoarding, attachment, parental deprivation, trauma

Procedia PDF Downloads 125
253 Responses of Grain Yield, Anthocyanin and Antioxidant Capacity to Water Condition in Wetland and Upland Purple Rice Genotypes

Authors: Supaporn Yamuangmorn, Chanakan Prom-U-Thai

Abstract:

Wetland and upland purple rice are the two major types classified by their original ecotypes in Northern Thailand. Wetland rice is grown under flooded conditions from transplanting until maturity, while upland rice is naturally grown in well-drained soil, known as aerobic cultivation. Both ecotypes can be grown and adapted to the reverse systems, but little is known about the responses in grain yield and quality between the two ecotypes. This study evaluated the responses of grain yield as well as anthocyanin and antioxidant capacity between wetland and upland purple rice genotypes grown under submerged and aerobic conditions. A factorial arrangement in a randomized complete block design (RCBD) with the two factors of rice genotype and water condition was carried out in three replications. Two wetland genotypes (Kum Doi Saket: KDK and Kum Phayao: KPY) and two upland genotypes (Kum Hom CMU: KHCMU and Pieisu1: PES1) were used in this study, grown under submerged and aerobic conditions. Grain yield was affected by the interaction between water condition and rice genotype. The wetland genotypes KDK and KPY grown in the submerged condition produced about 2.7 and 0.8 times higher yield than in the aerobic condition, respectively. The upland genotype PES1 produced about 0.4 times higher grain yield in the submerged condition than in the aerobic condition, but no significant difference was found in KHCMU. In the submerged condition, all genotypes produced higher yield components of tiller number, panicle number, and percent filled grain than in the aerobic condition, by 24%, 32%, and 11%, respectively. The thousand-grain weight and spikelet number were affected by water condition differently among genotypes. The wetland genotypes KDK and KPY and the upland genotype PES1 grown in the submerged condition produced about 19-22% higher grain weight than in the aerobic condition. A similar effect was found in spikelet number, for which the wetland genotypes KDK and KPY and the upland genotype KHCMU had about 28-30% higher values in the submerged condition than in the aerobic condition. In contrast, anthocyanin concentration and antioxidant capacity were affected by both water condition and genotype. Grain grown in the aerobic condition had about 0.9 and 2.6 times higher anthocyanin concentration than in the submerged condition in the wetland rice KDK and the upland rice KHCMU, respectively. Similarly, the antioxidant capacity of the wetland rice KDK and the upland rice KHCMU was 0.5 and 0.6 times higher in the aerobic condition than in the submerged condition. There was a negative correlation between grain yield and anthocyanin concentration in the wetland genotype KDK and the upland genotype KHCMU, but not in the other genotypes. This study indicates that some rice genotypes can be adapted to the reverse ecosystem in both grain yield and quality, especially the wetland genotype KPY and the upland genotype PES1. To maximize the grain yield and quality of purple rice, proper water management is required, with a key consideration being the different responses among genotypes. A larger number of rice genotypes of both ecotypes is needed to confirm their responses to water management.

Keywords: purple rice, water condition, anthocyanin, grain yield

Procedia PDF Downloads 161
252 Functionalization of Carbon-Coated Iron Nanoparticles with Fluorescent Protein

Authors: A. G. Pershina, P. S. Postnikov, M. E. Trusova, D. O. Burlakova, A. E. Sazonov

Abstract:

The development of magnetic-fluorescent nanocomposites is a rapidly advancing area of research. The attractiveness of magnetic-fluorescent nanocomposites lies in the possibility of simultaneously manipulating and monitoring them by two independent methods based on different physical principles. These nanocomposites are applied to the solution of various essential scientific and experimental biomedical problems. The aim of this research is the development of a principal approach to the design of nanobiohybrid structures with magnetic and fluorescent properties. The surface of carbon-coated iron nanoparticles (Fe@C) was covalently modified with 4-carboxybenzenediazonium tosylate. Recombinant fluorescent protein TagGFP2 (Eurogen) was obtained in E. coli (Rosetta DE3) by standard laboratory techniques. Immobilization of TagGFP2 on the nanoparticle surface was achieved by carbodiimide activation. The amount of COOH groups on the nanoparticle surface was estimated by elemental analysis (Elementar Vario Macro) and TGA analysis (SDT Q600, TA Instruments). The obtained nanocomposites were analyzed by FTIR spectroscopy (Nicolet Thermo 5700) and fluorescence microscopy (AxioImager M1, Carl Zeiss). The amount of protein immobilized on the modified nanoparticle surface was determined by fluorimetry (Cary Eclipse) and spectrophotometry (Unico 2800) with the help of previously obtained calibration plots. In the FTIR spectra of the modified nanoparticles, the absorption band of the –COOH group around 1700 cm-1 and bands in the region of 450-850 cm-1, caused by bending vibrations of the benzene ring, were observed. The calculated quantity of active groups on the surface was equal to 0.1 mmol/g of material. Carbodiimide activation of the COOH groups on the nanoparticle surface resulted in covalent immobilization of the TagGFP2 fluorescent protein (0.2 nmol/mg). The success of the immobilization was proved by FTIR spectroscopy: characteristic protein absorption bands in the region of 1500-1600 cm-1 (amide I) were present in the FTIR spectrum of the nanocomposite. Fluorescence microscopy analysis shows that the Fe@C-TagGFP2 nanocomposite possesses fluorescent properties. This fact confirms that the TagGFP2 protein retains its conformation upon immobilization on the nanoparticle surface. The magnetic-fluorescent nanocomposite was thus obtained as the result of a unique design solution: the fluorescent protein molecules were fixed to the surface of superparamagnetic carbon-coated iron nanoparticles using original diazonium salts.

Keywords: carbon-coated iron nanoparticles, diazonium salts, fluorescent protein, immobilization

Procedia PDF Downloads 343
251 Basotho Cultural Shift: The Role of Dress in the Shift

Authors: Papali Elizabeth Maqalika

Abstract:

Introduction: Dress is used daily and can be used to define culture, and through it, individuals form a sense of self and identity. One of the characteristics of culture is that it evolves, and Basotho culture is no exception. It has evolved through rites of entry, significant ceremonies, daily living, and the approach to others. Most of these affect, and have been affected by, the local/traditional dress. The study focused on the evolution of culture and the role played by dress, as dress is one of the major contributors to non-verbal communication. Methodology: Secondary data were used, since most of the original cultural practices are no longer held dear in the value system and so are no longer practiced. Interviews were conducted to get insights from senior citizens, and their responses were compared to those of present-day adults. Content analysis was used for the interview data. Results: The nature of governance in Lesotho has clearly contributed to the current state of cultural confusion. The Basotho culture has indeed shifted, and the difference in dress code reflects this. Acculturation, changes in environments, and the types of occasions Basotho have attended lately have contributed to the shift. Technology brought about differences in the mode of transport, sports, household activities, and gender roles. Conclusion and Recommendations: It was concluded that, since culture is imparted through socialisation, the reduced availability of most Basotho women leaves little time for socialisation with children, who are then brought up through other patterns, most of which are not cultural; this has brought about a cultural shift. In addition, acculturation has contributed massively to the value system of the Basotho. The type of dress worn by Basotho at present shifts the culture, and the shifting culture in turn shifts the dress required to suit the present culture. Because of the mindset Basotho have now, it is recommended that cultural days be observed in schools, including multi-racial ones, and that the media assist in transmitting this information. Campaigns regarding the value of traditional dress and what it represents are recommended. The local dressmakers manufacturing the Seshoeshoe and any other traditional dress need to be educated about the fabric history, fiber content, and consequent care in order to be in a position to guide the ultimate consumers of the products. Awareness campaigns emphasising that culture shifts, and that such shifts may not necessarily be negative, should be undertaken. Cultural exhibitions should also be held, ideally at places that hold some cultural heritage. The ministry of sports and culture, together with that of tourism, should pursue a vision of cultural awareness and enrichment with a focus on education as opposed to revenue collection.

Keywords: Basotho, culture, dress, acculturation, influence, cultural heritage, socialization, non-verbal communication, Seshoeshoe

Procedia PDF Downloads 80
250 Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Secondary Distant Metastases Growth

Authors: Ella Tyuryumina, Alexey Neznanov

Abstract:

This study is an attempt to obtain reliable data on the natural history of breast cancer growth. We analyze the opportunities for using classical mathematical models (exponential and logistic tumor growth models, Gompertz and von Bertalanffy tumor growth models) to describe the growth of the primary tumor and the secondary distant metastases of human breast cancer. The research aim is to improve the prediction accuracy of breast cancer progression using an original mathematical model referred to as CoMPaS and corresponding software. We are interested in: 1) modelling the whole natural history of the primary tumor and the secondary distant metastases; 2) developing an adequate and precise CoMPaS that reflects the relations between the primary tumor and the secondary distant metastases; 3) analyzing the scope of application of CoMPaS; 4) implementing the model as a software tool. The foundation of CoMPaS is the exponential tumor growth model, which is described by deterministic nonlinear and linear equations. CoMPaS corresponds to the TNM classification. It allows the calculation of different growth periods of the primary tumor and the secondary distant metastases: 1) the ‘non-visible period’ for the primary tumor; 2) the ‘non-visible period’ for the secondary distant metastases; 3) the ‘visible period’ for the secondary distant metastases. CoMPaS is validated on clinical data of 10-year and 15-year survival depending on the tumor stage and the diameter of the primary tumor. The new predictive tool: 1) is a solid foundation for future studies of breast cancer growth models; 2) does not require any expensive diagnostic tests; 3) is the first predictor that makes a forecast using only current patient data, whereas the others are based on additional statistical data. The CoMPaS model and predictive software: a) fit clinical trials data; b) detect different growth periods of the primary tumor and the secondary distant metastases; c) forecast the period of appearance of the secondary distant metastases; d) have higher average prediction accuracy than the other tools; e) can improve forecasts of breast cancer survival and facilitate optimization of diagnostic tests. The following are calculated by CoMPaS: the number of doublings for the ‘non-visible’ and ‘visible’ growth periods of the secondary distant metastases, and the tumor volume doubling time (days) for the ‘non-visible’ and ‘visible’ growth periods of the secondary distant metastases. CoMPaS enables, for the first time, prediction of the ‘whole natural history’ of the primary tumor and the secondary distant metastases growth at each stage (pT1, pT2, pT3, pT4), relying only on the primary tumor sizes. Summarizing: a) CoMPaS correctly describes the primary tumor growth of stages IA, IIA, IIB, IIIB (T1-4N0M0) without metastases in lymph nodes (N0); b) it facilitates the understanding of the appearance period and inception of the secondary distant metastases.
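
The exponential foundation of CoMPaS can be written out explicitly; in the usual form (our notation, not necessarily the paper's),

```latex
V(t) \;=\; V_0\, e^{kt} \;=\; V_0\, 2^{\,t/\mathrm{DT}}, \qquad
\mathrm{DT} \;=\; \frac{\ln 2}{k}, \qquad
n_{\text{doublings}} \;=\; \log_2 \frac{V(t)}{V_0},
```

so the number of doublings and the volume doubling time (in days) follow directly from two tumor volumes and the time between them.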

Keywords: breast cancer, exponential growth model, mathematical model, metastases in lymph nodes, primary tumor, survival

Procedia PDF Downloads 341
249 Targeting and Developing the Remaining Pay in an Ageing Field: The Ovhor Field Experience

Authors: Christian Ihwiwhu, Nnamdi Obioha, Udeme John, Edward Bobade, Oghenerunor Bekibele, Adedeji Awujoola, Ibi-Ada Itotoi

Abstract:

Understanding the complexity in the distribution of hydrocarbon in a simple structure with flow baffles and connectivity issues is critical in targeting and developing the remaining pay in a mature asset. Subtle facies changes (heterogeneity) can have a drastic impact on reservoir fluids movement, and this can be crucial to identifying sweet spots in mature fields. This study aims to evaluate selected reservoirs in Ovhor Field, Niger Delta, Nigeria, with the objective of optimising production from the field by targeting undeveloped oil reserves, bypassed pay, and gaining an improved understanding of the selected reservoirs to increase the company’s reservoir limits. The task at the Ovhor field is complicated by poor stratigraphic seismic resolution over the field. 3-D geological (sedimentology and stratigraphy) interpretation, use of results from quantitative interpretation, and proper understanding of production data have been used in recognizing flow baffles and undeveloped compartments in the field. The full field 3-D model has been constructed in such a way as to capture heterogeneities and the various compartments in the field to aid the proper simulation of fluid flow in the field for future production prediction, proper history matching and design of good trajectories to adequately target undeveloped oil in the field. Reservoir property models (porosity, permeability, and net-to-gross) have been constructed by biasing log interpreted properties to a defined environment of deposition model whose interpretation captures the heterogeneities expected in the studied reservoirs. At least, two scenarios have been modelled for most of the studied reservoirs to capture the range of uncertainties we are dealing with. The total original oil in-place volume for the four reservoirs studied is 157 MMstb. The cumulative oil and gas production from the selected reservoirs are 67.64 MMstb and 9.76 Bscf respectively, with current production rate of about 7035 bopd and 4.38 MMscf/d (as at 31/08/2019). Dynamic simulation and production forecast on the 4 reservoirs gave an undeveloped reserve of about 3.82 MMstb from two (2) identified oil restoration activities. These activities include side-tracking and re-perforation of existing wells. This integrated approach led to the identification of bypassed oil in some areas of the selected reservoirs and an improved understanding of the studied reservoirs. New wells have/are being drilled now to test the results of our studies, and the results are very confirmatory and satisfying.

Keywords: facies, flow baffle, bypassed pay, heterogeneities, history matching, reservoir limit

Procedia PDF Downloads 129
248 Real Estate Trend Prediction with Artificial Intelligence Techniques

Authors: Sophia Liang Zhou

Abstract:

For investors, businesses, consumers, and governments, an accurate assessment of future housing prices is crucial to critical decisions in resource allocation, policy formation, and investment strategies. Previous studies are contradictory about macroeconomic determinants of housing price and largely focused on one or two areas using point prediction. This study aims to develop data-driven models to accurately predict future housing market trends in different markets. This work studied five different metropolitan areas representing different market trends and compared three-time lagging situations: no lag, 6-month lag, and 12-month lag. Linear regression (LR), random forest (RF), and artificial neural network (ANN) were employed to model the real estate price using datasets with S&P/Case-Shiller home price index and 12 demographic and macroeconomic features, such as gross domestic product (GDP), resident population, personal income, etc. in five metropolitan areas: Boston, Dallas, New York, Chicago, and San Francisco. The data from March 2005 to December 2018 were collected from the Federal Reserve Bank, FBI, and Freddie Mac. In the original data, some factors are monthly, some quarterly, and some yearly. Thus, two methods to compensate missing values, backfill or interpolation, were compared. The models were evaluated by accuracy, mean absolute error, and root mean square error. The LR and ANN models outperformed the RF model due to RF’s inherent limitations. Both ANN and LR methods generated predictive models with high accuracy ( > 95%). It was found that personal income, GDP, population, and measures of debt consistently appeared as the most important factors. It also showed that technique to compensate missing values in the dataset and implementation of time lag can have a significant influence on the model performance and require further investigation. The best performing models varied for each area, but the backfilled 12-month lag LR models and the interpolated no lag ANN models showed the best stable performance overall, with accuracies > 95% for each city. This study reveals the influence of input variables in different markets. It also provides evidence to support future studies to identify the optimal time lag and data imputing methods for establishing accurate predictive models.
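
A minimal sketch of the modeling comparison described (LR vs. RF vs. ANN with a 12-month lag and simple imputation) follows; the file name, feature names, and specific preprocessing choices are assumptions for illustration, not the study's pipeline.

```python
# Illustrative sketch only: compare LR, RF and ANN on a lagged home-price dataset.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("metro_housing.csv", parse_dates=["date"]).sort_values("date")
features = ["gdp", "population", "personal_income", "unemployment", "mortgage_rate"]

df[features] = df[features].interpolate()            # or .bfill() to compare imputation
df["target"] = df["case_shiller_index"].shift(-12)   # predict the index 12 months ahead
df = df.dropna()

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["target"], shuffle=False, test_size=0.2)

models = {"LR": LinearRegression(),
          "RF": RandomForestRegressor(n_estimators=200, random_state=0),
          "ANN": MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)}
for name, m in models.items():
    m.fit(X_train, y_train)
    print(name, "MAE:", mean_absolute_error(y_test, m.predict(X_test)))
```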

Keywords: linear regression, random forest, artificial neural network, real estate price prediction

Procedia PDF Downloads 103
247 Talking to Ex-Islamic State Fighters inside Iraqi Prisons: An Arab Woman’s Perspective on Radicalization and Deradicalization

Authors: Suha Hassen

Abstract:

This research aims to untangle the complexity of conducting face-to-face interviews with 80 ex-Islamic State fighters, encompassing three groups: local Iraqis, Arabs from the Middle East, and international fighters from around the globe. Each interview lasted approximately two hours and was conducted in both Arabic and English, focusing on the motivations behind joining the Islamic State and the pathways and mechanisms facilitating their involvement. The phenomenon of individuals joining violent Islamist extremist and jihadist organizations is multifaceted, drawing substantial attention within terrorism and security studies. Organizations such as the Islamic State, Hezbollah, Hamas, and Al-Qaeda pose formidable threats to international peace and stability, employing various terrorist tactics for radicalization and recruitment. However, significant gaps remain in current studies, including a lack of firsthand accounts, an inadequate understanding of original narratives (religious and linguistic) due to abstraction and misinterpretation of motivations, and a lack of Arab women's perspectives from the region. This study addresses these gaps by exploring the cultural, religious, and historical complexities that shape the narratives of ex-ISIS fighters. The paper will showcase three distinct cases: one French prisoner, one Moroccan fighter, and a local Iraqi, illustrating the diverse motivations and experiences that contribute to joining and leaving extremist groups. The findings provide valuable insights into the nuanced dynamics of radicalization, emphasizing the need for gender-sensitive approaches in counter-terrorism strategies and deradicalization programs. Importantly, this research has practical implications for counter-narrative policies and early-stage prevention of radicalization. By understanding the narratives used by ex-fighters, policymakers can develop targeted counter-narratives that disrupt recruitment efforts. Additionally, insights into the mechanisms of radicalization can inform early intervention programs, helping to identify and support at-risk individuals before they become entrenched in extremist ideologies. Ultimately, this research enhances our understanding of the individual experiences of ex-ISIS fighters and calls for a reevaluation of the narratives surrounding women’s roles in extremism and recovery.

Keywords: Arab women in extremism, counter-narrative policy, ex-ISIS fighters in Iraq, radicalization

Procedia PDF Downloads 25
246 Osseointegration Outcomes Following Amputee Lengthening

Authors: Jason Hoellwarth, Atiya Oomatia, Anuj Chavan, Kevin Tetsworth, Munjed Al Muderis

Abstract:

Introduction: Percutaneous EndoProsthetic Osseointegration for Limbs (PEPOL) facilitates improved quality of life (QOL) and objective mobility for most amputees dissatisfied with their traditional socket prosthesis (TSP) experience. Some amputees desiring PEPOL have residual bone much shorter than the currently marketed press-fit implant lengths of 14-16 cm, potentially a risk for failure to integrate. We report on the techniques used, the complications experienced, the management of those complications, and the overall mobility outcomes of seven patients who had femur distraction osteogenesis (DO) with a Freedom nail followed by PEPOL. Method: Retrospective evaluation of a prospectively maintained database identified nine patients (5 females) who had transfemoral DO in preparation for PEPOL with two years of follow-up after PEPOL. Six patients had traumatic causes of amputation, one had perinatal complications, one amputation was performed to manage necrotizing fasciitis, and one was performed as a result of osteosarcoma. Result: The average age at which DO commenced was 39.4±15.9 years, and seven patients had their amputation more than ten years prior (average 25.5±18.8 years). The residual femurs, on average, started at 102.2±39.7 mm and were lengthened 58.1±20.7 mm, 98±45% of the goal (99±161% of the original bone length). Five patients (56%) had a complication requiring additional surgery: four events of inadequate regeneration were managed with continued lengthening to the desired goal followed by autograft placement harvested from contralateral femur reaming; one patient had the cerclage wires break, which required operative replacement. All patients had osseointegration performed at 355±123 days after the initial lengthening nail surgery. One patient had a K-level >2 before DO; at a mean of 3.4±0.6 (2.6-4.4) years following osseointegration, six patients had a K-level >2. The 6-Minute Walk Test remained unchanged (267±56 vs. 308±117 meters). Patient self-ratings of prosthesis function, problems, and amputee situation did not significantly change from before DO to after osseointegration. Six patients required additional surgery following osseointegration: six to remove fixation plates placed to maintain distraction osteogenesis length at osseointegration; two required irrigation and debridement for infection. Conclusion: Extremely short residual femurs, which make TSP use troublesome, can be lengthened with externally controlled telescoping nails and successfully achieve osseointegration. However, it is imperative to counsel patients that additional surgery to address inadequate regeneration or to remove painful hardware used to maintain fixation may be necessary. This may help manage the amputee’s expectations before beginning a potentially arduous process.

Keywords: osseointegration, limb lengthening, quality of life, amputation

Procedia PDF Downloads 70
245 Gray’s Anatomy for Students: First South Asia Edition Highlights

Authors: Raveendranath Veeramani, Sunil Jonathan Holla, Parkash Chand, Sunil Chumber

Abstract:

Gray’s Anatomy for Students has been a well-appreciated book among undergraduate students of anatomy in Asia. However, the current curricular requirements of anatomy call for a more focused and organized approach. The editors of the first South Asia edition of Gray’s Anatomy for Students hereby highlight the modifications and importance of this edition. There is an emphasis on active learning by making the clinical relevance of anatomy explicit. Learning anatomy in context has been fostered by the association between anatomists and clinicians, in keeping with the emerging integrated curriculum of the 21st century. The language has been simplified to aid students who have studied in the vernacular. The original illustrations have been retained, and a few illustrations have been added. More figure numbers are mentioned in the text to encourage students to refer to the illustrations while learning. The text has been made more student-friendly by adding generalizations, classifications and summaries. There are useful review materials at the beginning of the chapters, which include digital resources for self-study. There are updates on imaging techniques to encourage students to appreciate the importance of essential knowledge of the relevant anatomy in interpreting images, and due emphasis has been laid on dissection. Additional importance has been given to the cranial nerves by describing their relevant details with several additional illustrations and flowcharts. This new edition includes innovative features such as set inductions, outlines for subchapters and flowcharts to facilitate learning. Set inductions are mostly clinical scenarios that create interest in the need to study anatomy for the healthcare professions. The outlines provide a modern, multimodal approach to various topics that empowers students to explore content and direct their learning, and they include learning objectives and material for review. The components of the outline encourage the student to be aware of the need to create solutions to clinical problems. The outlines help students direct their learning to recall facts, demonstrate and analyze relationships, use reason to explain concepts, appreciate the significance of structures and their relationships, and apply anatomical knowledge. The 'structures to be identified in a dissection' are given as Level I, II and III, which represent the 'must know', 'desirable to know' and 'nice to know' content, respectively. The flowcharts have been added to give an overview of the course of a structure, to recapitulate important details about structures, and to serve as an aid to recall. There has been a great effort to balance the need for content that enables students to understand concepts with the need to provide the basic material for the current condensed curriculum.

Keywords: Gray's Anatomy, South Asia, human anatomy, students' anatomy

Procedia PDF Downloads 202
244 AS-Geo: Arbitrary-Sized Image Geolocalization with Learnable Geometric Enhancement Resizer

Authors: Huayuan Lu, Chunfang Yang, Ma Zhu, Baojun Qi, Yaqiong Qiao, Jiangqian Xu

Abstract:

Image geolocalization has great application prospects in fields such as autonomous driving and virtual/augmented reality. In practical application scenarios, the size of the image to be located is not fixed, and it is impractical to train different networks for all possible sizes. When the image size does not match the input size of the descriptor extraction model, existing image geolocalization methods usually directly scale or crop the image in some common way. This results in the loss of information important to the geolocalization task, thus affecting the performance of the image geolocalization method. For example, excessive down-sampling can blur building contours, and inappropriate cropping can discard key semantic elements, leading to incorrect geolocalization results. To address this problem, this paper designs a learnable image resizer and proposes an arbitrary-sized image geolocalization method. (1) The designed learnable image resizer employs the self-attention mechanism to enhance the geometric features of the resized image. First, it applies bilinear interpolation to the input image and its feature maps to obtain the initial resized image and the resized feature maps. Then, SKNet (selective kernel network) is used to approximate the best receptive field, thus keeping the geometric shapes consistent with the original image, and SENet (squeeze-and-excitation network) is used to automatically select the feature maps with strong contour information, enhancing the geometric features. Finally, the enhanced geometric features are fused with the initial resized image to obtain the final resized image. (2) The proposed image geolocalization method embeds the above image resizer as a front-end layer of the descriptor extraction network. It not only enables the network to be compatible with arbitrary-sized input images but also enhances the geometric features that are crucial to the image geolocalization task. Moreover, a triplet attention mechanism is added after the first convolutional layer of the backbone network to optimize the utilization of the geometric elements extracted by that layer. Finally, the local features extracted by the backbone network are aggregated to form image descriptors for geolocalization. The proposed method was evaluated on several mainstream datasets, such as Pittsburgh30K, Tokyo24/7, and Places365. The results show that the proposed method has excellent size compatibility and compares favorably with recent mainstream geolocalization methods.
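
The following PyTorch sketch is an assumption-laden simplification of the resizer idea, not the authors' released code: it bilinearly resizes the input, extracts shallow feature maps, reweights their channels with a squeeze-and-excitation block so that contour-rich channels are emphasised, and fuses the result with the bilinearly resized image; the SKNet branch-selection step and the downstream descriptor network are omitted, and all layer sizes are illustrative.

# Simplified, illustrative resizer sketch; not the paper's actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                    # squeeze: global context
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                               # per-channel weights
        )

    def forward(self, x):
        return x * self.fc(x)                           # excitation: reweight channels

class GeometricResizer(nn.Module):
    """Resize an arbitrary-sized image to out_size while enhancing geometry."""

    def __init__(self, out_size=(224, 224), channels=16):
        super().__init__()
        self.out_size = out_size
        self.stem = nn.Conv2d(3, channels, kernel_size=3, padding=1)
        self.se = SEBlock(channels)
        self.fuse = nn.Conv2d(channels, 3, kernel_size=3, padding=1)

    def forward(self, x):
        resized = F.interpolate(x, size=self.out_size, mode="bilinear",
                                align_corners=False)    # initial bilinear resize
        feats = F.relu(self.stem(x))
        feats = F.interpolate(feats, size=self.out_size, mode="bilinear",
                              align_corners=False)      # resized feature maps
        feats = self.se(feats)                          # emphasise contour channels
        return resized + self.fuse(feats)               # fuse with the resized image

# Example: images of two different sizes map to the same descriptor-ready size.
resizer = GeometricResizer()
for h, w in [(480, 640), (300, 500)]:
    out = resizer(torch.randn(1, 3, h, w))
    print(out.shape)    # torch.Size([1, 3, 224, 224])

Because the module is differentiable end to end, it can be trained jointly with the descriptor extraction network, which is the key difference from a fixed bilinear or crop-based preprocessing step.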

Keywords: image geolocalization, self-attention mechanism, image resizer, geometric feature

Procedia PDF Downloads 216
243 Water Desalination by Membrane Distillation with MFI Zeolite Membranes

Authors: Angelo Garofalo, Laura Donato, Maria Concetta Carnevale, Enrico Drioli, Omar Alharbi, Saad Aljlil, Alessandra Criscuoli, Catia Algieri

Abstract:

Nowadays, water scarcity may be considered one of the most serious issues facing our communities: there is a remarkable mismatch between water supply and water demand. The exploitation of natural fresh water resources combined with higher water demand has led to an increased requirement for alternative water resources. In this context, desalination provides such an alternative source, offering water otherwise not accessible for irrigational, industrial and municipal use. Considering the various drawbacks of polymeric membranes, zeolite membranes represent a potential device for water desalination owing to their high thermal and chemical stability. In this area, wide attention has been focused on MFI (silicalite, ZSM-5) membranes, whose pore size (about 5.5 Å) is smaller than the major kinetic diameters of hydrated ions. In the present work, a scale-up of the preparation of supported silicalite membranes was performed. Tubular membranes 30 cm long were synthesized by using the secondary growth method coupled with a cross-flow seeding procedure. Secondary growth involves two steps: seeding and growth of zeolite crystals on the support. This process, by decoupling zeolite nucleation from crystal growth, permits the conditions of each step to be controlled separately. The seeding procedure consists of a cross-flow filtration through a porous support coupled with support rotation and tilting. The combination of these three different aspects allows a homogeneous and uniform coverage of the support with the zeolite seeds. After characterization by scanning electron microscopy (SEM), X-ray diffractometry (XRD) and energy-dispersive X-ray (EDX) analysis, the prepared membranes were tested by means of single gas permeation and then by Vacuum Membrane Distillation (VMD) using both deionized water and NaCl solutions. The experimental results demonstrated the feasibility of scaling up the preparation of almost defect-free silicalite membranes. VMD tests indicated that it is possible to prepare membranes that exhibit interesting performance in terms of fluxes and salt rejections for concentrations from 0.2 M to 0.9 M. Furthermore, it was possible to restore the original performance of the membrane after an identified cleaning procedure. Acknowledgements: The authors gratefully acknowledge the support of the King Abdulaziz City for Science and Technology (KACST) for funding the research Project 895/33 entitled ‘Preparation and Characterization of Zeolite Membranes for Water Treatment’.

Keywords: desalination, MFI membranes, secondary growth, vacuum membrane distillation

Procedia PDF Downloads 256
242 Non-Perturbative Vacuum Polarization Effects in One- and Two-Dimensional Supercritical Dirac-Coulomb System

Authors: Andrey Davydov, Konstantin Sveshnikov, Yulia Voronina

Abstract:

There is now considerable interest in the non-perturbative QED effects caused by the diving of discrete levels into the negative continuum in supercritical static or adiabatically slowly varying Coulomb fields, which are created by localized extended sources with Z > Z_cr. Such effects have attracted a considerable amount of theoretical and experimental activity, since in 3+1 QED for Z > Z_cr,1 ≈ 170 a non-perturbative reconstruction of the vacuum state is predicted, which should be accompanied by a number of nontrivial effects, including vacuum positron emission. Essentially similar effects should also be expected in both 2+1 D (planar graphene-based hetero-structures) and 1+1 D (the one-dimensional ‘hydrogen ion’). This report is devoted to the study of such essentially non-perturbative vacuum effects for supercritical Dirac-Coulomb systems in 1+1D and 2+1D, with the main attention drawn to the vacuum polarization energy. Although most works consider the vacuum charge density as the main polarization observable, the vacuum energy turns out to be no less informative and in many respects complementary to the vacuum density. Moreover, the main non-perturbative effects, which appear in vacuum polarization for supercritical fields due to the diving of levels into the lower continuum, show up in the behavior of the vacuum energy even more clearly, demonstrating explicitly their possible role in the supercritical region. Both in 1+1D and 2+1D, we first explore the renormalized vacuum density in the supercritical region using the Wichmann-Kroll method. Thereafter, taking into account the results for the vacuum density, we formulate the renormalization procedure for the vacuum energy. To evaluate the latter explicitly, an original technique, based on a special combination of analytical methods, computer algebra tools and numerical calculations, is applied. It is shown that, for a wide range of the external source parameters (the charge Z and size R), in the supercritical region the renormalized vacuum energy can deviate significantly from the perturbative quadratic growth, up to a pronouncedly decreasing behavior with jumps of (-2 x mc^2), which occur each time the next discrete level dives into the negative continuum. In the considered range of variation of Z and R, the vacuum energy behaves like ~ -Z^2/R in 1+1D and ~ -Z^3/R in 2+1D, reaching deeply negative values. Such behavior confirms the assumption of the transmutation of the neutral vacuum into a charged one, and thereby of the spontaneous positron emission accompanying the emergence of the next vacuum shell, due to total charge conservation. Finally, we also note that the methods developed for the vacuum energy evaluation in 2+1 D could, with minimal modifications, be carried over to the three-dimensional case, where the vacuum energy is expected to behave like ~ -Z^4/R and so could be competitive with the classical electrostatic energy of the Coulomb source.
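
As a purely schematic illustration, not an expression taken from the paper, the supercritical behavior described above can be summarized as a perturbative part plus a staircase of level-diving jumps, written here in LaTeX notation:

\mathcal{E}_{VP}(Z) \;\approx\; \mathcal{E}_{VP}^{\mathrm{pert}}(Z) \;-\; 2\,mc^{2}\,\sum_{n}\theta\!\left(Z - Z_{\mathrm{cr},\,n}\right)

where \theta is the Heaviside step function and Z_{cr,n} denotes the critical charge at which the n-th discrete level reaches the negative continuum; for Z well above Z_cr,1 the envelope of this curve follows the ~ -Z^2/R (1+1D) and ~ -Z^3/R (2+1D) trends quoted above.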

Keywords: non-perturbative QED-effects, one- and two-dimensional Dirac-Coulomb systems, supercritical fields, vacuum polarization

Procedia PDF Downloads 202
241 Optimization and Evaluation of Different Pathways to Produce Biofuel from Biomass

Authors: Xiang Zheng, Zhaoping Zhong

Abstract:

In this study, Aspen Plus was used to simulate the whole process of biomass conversion to liquid fuel via different routes, and the main results of the material and energy flows were obtained. Process optimization and evaluation were carried out on four routes: cellulosic biomass pyrolysis-gasification followed by low-carbon olefin synthesis and olefin oligomerization, biomass hydrothermal pyrolysis and polymerization to jet fuel, biomass fermentation to ethanol, and biomass pyrolysis to liquid fuel. The environmental impacts of three biomass species (poplar wood, corn stover, and rice husk) were compared for the gasification-synthesis pathway. The global warming potential, acidification potential, and eutrophication potential of the three biomasses followed the same order: rice husk > poplar wood > corn stover. In terms of human health hazard potential and solid waste potential, the order was poplar > rice husk > corn stover. In this pathway, 100 kg of poplar biomass was input to obtain 11.9 kg of aviation kerosene fraction and 6.3 kg of gasoline fraction. The energy conversion rate of the system was 31.6% when the output product energy included only the aviation kerosene product. In the base case of the hydrothermal depolymerization process, 14.41 kg of aviation kerosene was produced per 100 kg of biomass. The energy conversion rate of the base case was 33.09%, which can be increased to 38.47% by making optimal use of lignin gasification and steam reforming for hydrogen production. The total exergy efficiency of the system increased from 30.48% to 34.43% after optimization, and the exergy loss mainly came from the concentration of the dilute precursor solution. The global warming potential in the environmental impact assessment is mostly affected by the production process. Poplar wood was used as the raw material in the process of ethanol production from cellulosic biomass. The simulation results showed that 827.4 kg of pretreatment mixture, 450.6 kg of fermentation broth, and 24.8 kg of ethanol were produced per 100 kg of biomass. The energy output of boiler combustion reached 94.1 MJ, the energy consumption in the process was 174.9 MJ, and the energy conversion rate was 33.5%. The environmental impact was mainly concentrated in the production process and agricultural processes. Building on the original biomass pyrolysis-to-liquid-fuel route, the enzymatic hydrolysis lignin residue produced by cellulose fermentation to ethanol was used as the pyrolysis feedstock, coupling the fermentation and pyrolysis processes. In the coupled process, 24.8 kg of ethanol and 4.78 kg of upgraded liquid fuel were produced per 100 kg of biomass, with an energy conversion rate of 35.13%.
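
As a back-of-the-envelope illustration of how an energy conversion rate of this kind can be read, the short Python sketch below divides product energy by feedstock energy; the lower heating values are rough assumed figures, not values reported by the study, so the result only roughly approaches the quoted 31.6%.

# Illustrative only: energy conversion rate = product energy / feedstock energy.
lhv = {"poplar": 16.0, "jet_fuel": 43.0, "gasoline": 43.5}   # MJ/kg, assumed values

feed_kg = 100.0
jet_kg, gasoline_kg = 11.9, 6.3        # product masses reported in the abstract

feed_energy = feed_kg * lhv["poplar"]
jet_only = jet_kg * lhv["jet_fuel"] / feed_energy
all_products = (jet_kg * lhv["jet_fuel"] + gasoline_kg * lhv["gasoline"]) / feed_energy

print(f"jet fuel only:  {jet_only:.1%}")      # roughly 32% with these assumed LHVs
print(f"both products:  {all_products:.1%}")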

Keywords: biomass conversion, biofuel, process optimization, life cycle assessment

Procedia PDF Downloads 70
240 The Rehabilitation of The Covered Bridge Leclerc (P-00249) Passing Over the Bouchard Stream in LaSarre, Quebec

Authors: Nairy Kechichian

Abstract:

The original Leclerc Bridge is a covered wooden bridge that is considered a Quebec heritage structure, with an index of 60, making it a very important provincial bridge from a historical point of view. It was constructed in 1927 and is located in the rural region of Abitibi-Temiscamingue. It is a “Town Québécois” type of structure, which is generally rare but common for covered bridges in Abitibi-Temiscamingue. This type of structure is composed of two trusses on both sides formed with diagonals, internal bracings, uprights, and top and bottom chords to allow the transmission of loads. This structure is mostly known for its solidity, light weight, and ease of construction. It is a single-span bridge with a length of 25.3 meters and allows the passage of one vehicle at a time with a 4.22-meter driving lane. The structure is composed of two trusses located at each side of the deck, two gabion foundations at both ends, uprights, and top and bottom chords. WSP (Williams Sale Partnership) Canada inc. was mandated by the Transport Minister of Quebec in 2019 to increase the capacity of the bridge from 5 tons to 30.6 tons and to rehabilitate it, as it had deteriorated quite significantly over the years. The bridge was damaged due to material deterioration over time, exposure to humidity, high load effects and insect infestation. To allow the passage of 3-axle trucks, as well as to keep the integrity of this heritage structure, the final design chosen to rehabilitate the bridge involved adding a new deck independent of the roof structure of the bridge. Essentially, new steel beams support the deck loads and the desired vehicle loads. The roof of the bridge is linked to the steel deck for lateral support, but it is isolated from the wooden deck. The roof is preserved for aesthetic reasons and remains intact as a heritage piece. Due to strict traffic management constraints, an efficient construction method was put into place: a temporary bridge was built and the existing roof was moved onto it, allowing vehicles to circulate on one side of the temporary bridge while the roof was repaired on the other side at the same time. In parallel, this method allowed the demolition and reconstruction of the existing foundations, the construction of a new steel deck, and the transport of the roof back onto the new bridge. One of the main criteria for the rehabilitation of the wooden bridge was to preserve, as much as possible, the existing heritage architectural design of the bridge. The project was completed successfully by the end of 2021.

Keywords: covered bridge, wood-steel, short span, town Québécois structure

Procedia PDF Downloads 67
239 A Preliminary End-Point Approach for Calculating Odorous Emissions in Life Cycle Assessment

Authors: G. M. Cappucci, C. Losi, P. Neri, M. Pini, A. M. Ferrari

Abstract:

Waste treatment and many production processes cause significant emissions of odors, typically leading to intense debate. The European standard EN 13725 of 2003 introduced odorimetric units and their unit of measurement, i.e., O.U./m3, and designates dynamic olfactometry as the official method for odorimetric analysis. Italy filled the pre-existing legislative gap on the regulation of odorous emissions only recently, by introducing Legislative Decree n°183 in 2017. The odor concentration at which a perceptive response occurs in 50% of the panel corresponds to one odorimetric unit for the sample under examination (1 O.U./m3) and is equal to the perceptibility threshold of the substance (O.T.). In particular, the treatment of Municipal Solid Waste (MSW) in Mechanical-Biological Treatment (MBT) plants produces odorous emissions, typically generated by aerobic processes, potentially leading to significant environmental burdens. The quantification of odorous emissions represents a challenge within an LCA study since primary data are often missing. This paper presents the preliminary findings of an ongoing study whose aim is to identify and quantify odor emissions from the Tre Monti MBT plant, located in Imola (Bologna, Italy). In particular, the issues addressed in the present work are: i) the identification of the components of the gaseous mixture, whose total quantification in terms of odorimetric units is known, ii) the distribution of the total odorimetric units among the single substances identified, and iii) the quantification of the mass emitted for each substance. The environmental analysis was carried out on the basis of the amount of each emitted substance. The calculation method IMPact Assessment of Chemical Toxics (IMPACT) 2002+ was modified since the original one does not take indoor emissions into account. Characterization factors were obtained by adopting a preliminary method in order to calculate indoor human effects. The impact and damage assessments were performed without the identification of new categories, thus in accordance with the categories of the selected calculation method. The results show that the damage associated with odorous emissions is 0.24% of the total damage, and the most affected damage category is Human Health, mainly as a consequence of ammonia emission (86.06%). In conclusion, this preliminary approach allowed the identification and quantification of the substances responsible for the odor impact, in order to attribute to them the relative damage to human health as well as to ecosystem quality.
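
The sketch below illustrates, under stated assumptions rather than the paper's actual procedure, how the total odorimetric units of a mixture might be apportioned among identified substances and converted into emitted masses using their odor thresholds; the substances, shares, thresholds and flow rate are placeholder values.

# Illustrative sketch: apportion total O.U. among substances, then estimate mass.
# For a single substance, 1 O.U./m3 corresponds by definition to its odor
# threshold (O.T.); odour units are assumed additive across substances here.
total_ou = 1000.0          # total odour concentration of the mixture, O.U./m3
flow_rate_m3_h = 20000.0   # emitted air flow, m3/h (hypothetical)
hours = 1.0

substances = {             # name: (assumed share of total O.U., O.T. in mg/m3)
    "ammonia":          (0.60, 1.1),
    "hydrogen sulfide": (0.25, 0.0007),
    "limonene":         (0.15, 0.2),
}

for name, (share, ot_mg_m3) in substances.items():
    conc_mg_m3 = share * total_ou * ot_mg_m3              # estimated mass concentration
    mass_g = conc_mg_m3 * flow_rate_m3_h * hours / 1000.0  # mass emitted over the period
    print(f"{name}: {conc_mg_m3:.4f} mg/m3, {mass_g:.2f} g emitted")

The masses obtained this way are the inputs the LCA then needs in order to attribute the human health and ecosystem quality damage to each substance.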

Keywords: life cycle assessment, municipal solid waste, odorous emissions, waste treatment

Procedia PDF Downloads 174
238 An Evolutionary Approach for QAOA for Max-Cut

Authors: Francesca Schiavello

Abstract:

This work aims to create a hybrid algorithm combining the Quantum Approximate Optimization Algorithm (QAOA) with an Evolutionary Algorithm (EA) in place of traditional gradient-based optimization processes. QAOAs were first introduced in 2014, when the algorithm performed better than the best known classical algorithm for Max-Cut graphs. Whilst classical algorithms have improved since then and have returned to being faster and more efficient, this was a huge milestone for quantum computing, and that work is often used as a benchmarking tool and a foundation for exploring variants of QAOAs. This, alongside other famous algorithms like Grover's or Shor's, highlights to the world the potential that quantum computing holds. It also points towards a real quantum advantage where, if the hardware continues to improve, this could constitute a revolutionary era. Given that the hardware is not there yet, many scientists are working on the software side of things in the hope of future progress. Some of the major limitations holding back quantum computing are the quality of qubits and the noisy interference they generate in creating solutions, the barren plateaus that effectively hinder the optimization search in the latent space, and the limited number of available qubits, which restricts the scale of the problems that can be solved. These three issues are intertwined and are part of the motivation for using EAs in this work. Firstly, EAs are not based on gradient or linear optimization methods for the search in the latent space, and because of their freedom from gradients, they should suffer less from barren plateaus. Secondly, given that this algorithm performs a search in the solution space through a population of solutions, it can also be parallelized to speed up the search and optimization. The evaluation of the cost function, as in many other algorithms, is notoriously slow, and the ability to parallelize it can drastically improve the competitiveness of QAOAs with respect to purely classical algorithms. Thirdly, because of the nature and structure of EAs, solutions can be carried forward in time, making them more robust to noise and uncertainty. Preliminary results show that the EA attached to QAOA can perform on par with the traditional QAOA using a COBYLA optimizer, which is a method based on linear approximations, and in some instances it can even produce a better Max-Cut. Whilst the final objective of the work is to create an algorithm that can consistently beat the original QAOA, or its variants, through either speedups or solution quality, this initial result is promising and shows the potential of EAs in this field. Further tests need to be performed on an array of different graphs, with the parallelization aspect of the work commencing in October 2023 and tests on real hardware scheduled for early 2024.
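
A minimal, self-contained sketch of the idea, assuming a toy 5-node ring graph and illustrative hyperparameters rather than anything from the author's implementation, is given below: a simple (mu + lambda) evolution strategy searches the QAOA angles, with the expectation value computed by brute-force statevector simulation, so it only scales to small instances.

# Illustrative sketch: an evolution strategy replaces the gradient/COBYLA
# optimizer in the outer loop of QAOA for Max-Cut (toy problem, numpy only).
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 5-node ring (its maximum cut is 4).
n_qubits = 5
edges = [(i, (i + 1) % n_qubits) for i in range(n_qubits)]

# Pre-compute the cut value of every computational basis state.
dim = 2 ** n_qubits
bits = (np.arange(dim)[:, None] >> np.arange(n_qubits)) & 1
cut_values = sum((bits[:, i] ^ bits[:, j]) for i, j in edges).astype(float)

def qaoa_expectation(params, p):
    """Expected cut <C> of a depth-p QAOA circuit with angles params = [gammas, betas]."""
    gammas, betas = params[:p], params[p:]
    state = np.full(dim, 1 / np.sqrt(dim), dtype=complex)            # |+>^n
    for gamma, beta in zip(gammas, betas):
        state *= np.exp(-1j * gamma * cut_values)                    # phase separator
        for q in range(n_qubits):                                    # X mixer on each qubit
            s = state.reshape(2 ** (n_qubits - q - 1), 2, 2 ** q)
            a, b = s[:, 0, :].copy(), s[:, 1, :].copy()
            s[:, 0, :] = np.cos(beta) * a - 1j * np.sin(beta) * b
            s[:, 1, :] = np.cos(beta) * b - 1j * np.sin(beta) * a
            state = s.reshape(dim)
    return float(np.sum(np.abs(state) ** 2 * cut_values))

def evolve(p=2, mu=10, lam=30, generations=60, sigma=0.3):
    """(mu + lambda) evolution strategy over the 2p QAOA angles, maximising <C>."""
    pop = rng.uniform(0, np.pi, size=(mu, 2 * p))
    for _ in range(generations):
        parents = pop[rng.integers(mu, size=lam)]
        children = parents + rng.normal(0, sigma, size=parents.shape)  # Gaussian mutation
        pool = np.vstack([pop, children])
        fitness = np.array([qaoa_expectation(ind, p) for ind in pool])
        pop = pool[np.argsort(fitness)[-mu:]]                          # keep the best mu
    best = pop[-1]
    return best, qaoa_expectation(best, p)

best_params, best_cut = evolve()
print(f"best expected cut ~ {best_cut:.3f} (max cut of the 5-ring is 4)")

Because each child in a generation is evaluated independently, the fitness loop is the natural place to parallelize, which is the speedup avenue mentioned in the abstract.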

Keywords: evolutionary algorithm, max cut, parallel simulation, quantum optimization

Procedia PDF Downloads 60
237 A Feature Clustering-Based Sequential Selection Approach for Color Texture Classification

Authors: Mohamed Alimoussa, Alice Porebski, Nicolas Vandenbroucke, Rachid Oulad Haj Thami, Sana El Fkihi

Abstract:

Color and texture are highly discriminant visual cues that provide essential information in many types of images. Color texture representation and classification is therefore one of the most challenging problems in computer vision and image processing applications. Color textures can be represented in different color spaces by using multiple image descriptors, which generate a high-dimensional set of texture features. In order to reduce the dimensionality of the feature set, feature selection techniques can be used. The goal of feature selection is to find a relevant subset of an original feature space that can improve the accuracy and efficiency of a classification algorithm. Traditionally, feature selection has focused on removing irrelevant features, neglecting the possible redundancy between relevant ones. This is why some feature selection approaches prefer to use feature clustering analysis to aid and guide the search. These techniques can be divided into two categories. i) Feature clustering-based ranking algorithms use feature clustering as an analysis step that comes before feature ranking: after dividing the feature set into groups, these approaches perform a feature ranking in order to select the most discriminant feature of each group. ii) Feature clustering-based subset search algorithms can use feature clustering following one of three strategies: as an initial step that comes before the search, bound and combined with the search, or as an alternative to and replacement for the search. In this paper, we propose a new feature clustering-based sequential selection approach for the purpose of color texture representation and classification. Our approach is a three-step algorithm, sketched below. First, irrelevant features are removed from the feature set thanks to a class-correlation measure. Then, introducing a new automatic feature clustering algorithm, the feature set is divided into several feature clusters. Finally, a sequential search algorithm, based on a filter model and a separability measure, builds a relevant and non-redundant feature subset: at each step, a feature is selected, and the features of the same cluster are removed and thus not considered thereafter. This significantly speeds up the selection process, since a large number of redundant features is eliminated at each step. The proposed algorithm uses the clustering algorithm bound and combined with the search. Experiments using a combination of two well-known texture descriptors, namely Haralick features extracted from Reduced Size Chromatic Co-occurrence Matrices (RSCCMs) and features extracted from Local Binary Pattern (LBP) image histograms, on five color texture data sets (Outex, NewBarktex, Parquet, Stex and USPtex) demonstrate the efficiency of our method compared to seven state-of-the-art methods in terms of accuracy and computation time.
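
A rough Python sketch of the three-step pipeline is given below; the class-correlation measure, clustering method, separability criterion and data are stand-ins (mutual information, hierarchical clustering on feature correlations, cross-validated 1-NN accuracy, synthetic data) rather than the components actually used in the paper.

# Illustrative three-step pipeline: relevance filter, feature clustering,
# and a sequential search that discards a selected feature's whole cluster.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=60, n_informative=10,
                           n_redundant=20, random_state=0)

# Step 1: remove irrelevant features (low class correlation).
relevance = mutual_info_classif(X, y, random_state=0)
kept = np.where(relevance > np.median(relevance))[0]

# Step 2: cluster the kept features, using 1 - |correlation| as a distance.
corr = np.corrcoef(X[:, kept], rowvar=False)
dist = 1.0 - np.abs(corr)
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method="average")
labels = fcluster(Z, t=8, criterion="maxclust")
cluster_of = dict(zip(kept, labels))

# Step 3: sequential forward search; picking a feature removes its whole cluster.
clf = KNeighborsClassifier(n_neighbors=1)
candidates, selected = list(kept), []
while candidates:
    scores = [cross_val_score(clf, X[:, selected + [f]], y, cv=3).mean()
              for f in candidates]
    best = candidates[int(np.argmax(scores))]
    selected.append(best)
    candidates = [f for f in candidates if cluster_of[f] != cluster_of[best]]

print(f"selected {len(selected)} features: {selected}")

Discarding an entire cluster at each step is what shrinks the candidate pool quickly and gives the speedup over a plain sequential forward search.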

Keywords: feature selection, color texture classification, feature clustering, color LBP, chromatic cooccurrence matrix

Procedia PDF Downloads 138