Search results for: sensory integration procedure
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5129

269 A Lexicographic Approach to Obstacles Identified in the Ontological Representation of the Tree of Life

Authors: Sandra Young

Abstract:

The biodiversity literature is vast and heterogeneous. In today’s data age, a number of data integration and standardisation initiatives aim to facilitate simultaneous access to all the literature across biodiversity domains for research and forecasting purposes. Ontologies are being used increasingly to organise this information, but the rationalisation intrinsic to ontologies can hit obstacles when faced with the fluidity and inconsistency found in the domains comprising biodiversity. Essentially the problem is a conceptual one: biological taxonomies are formed on the basis of specific, physical specimens, yet nomenclatural rules are used to provide labels to describe these physical objects. These labels are ambiguous representations of the physical specimen. An example is the genus name Melpomene, the scientific nomenclatural representation of a genus of ferns but also of a genus of spiders. The physical specimens for each of these are vastly different, but they have been assigned the same nomenclatural reference. While there is much research into the conceptual stability of the taxonomic concept versus the nomenclature used, to the best of our knowledge no research has yet looked empirically at the literature to examine the conceptual plurality or singularity of the use of these species’ names, the linguistic representation of a physical entity. Language itself uses words as symbols to represent real-world concepts, whether physical entities or otherwise, and as such lexicography has a well-founded history in the conceptual mapping of words in context for dictionary making. This makes it an ideal candidate to explore this problem. The lexicographic approach uses corpus-based analysis to look at word use in context, with a specific focus on collocated word frequencies (the frequencies of words used in specific grammatical and collocational contexts). It allows for inconsistencies and contradictions in the source data and in fact includes these in the word characterisation, so that 100% of the available evidence is counted. Corpus analysis is indeed suggested as one of the ways to identify concepts for ontology building, because of its ability to look empirically at data and show patterns in language usage, which can indicate conceptual ideas that go beyond the words themselves. In this sense it could potentially be used to identify whether the hierarchical structures present within the empirical body of literature match those which have been identified in ontologies created to represent them. The first stages of this research have revealed a hierarchical structure that becomes apparent in the biodiversity literature when annotating scientific species’ names, common names and more general names as classes, which will be the focus of this paper. The next step in the research focuses on a larger corpus in which specific words can be analysed and then compared with existing ontological structures covering the same material, to evaluate the methods by means of an alternative perspective. This research aims to provide evidence as to the validity of the current methods in knowledge representation for biological entities, and also to shed light on the way that scientific nomenclature is used within the literature.
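
As a minimal illustration of the collocational analysis described above, the sketch below counts the words that co-occur within a small window around a species name. The toy corpus, window size and tokenisation are assumptions made for illustration; they do not reproduce the authors' annotated corpus or toolchain.

```python
# Minimal sketch of collocate-frequency counting for a species name.
# Illustrative only: the corpus text and window size are hypothetical.
import re
from collections import Counter

def collocates(corpus: str, node: str, window: int = 3) -> Counter:
    """Count words co-occurring within +/- `window` tokens of `node`."""
    tokens = re.findall(r"[A-Za-z]+", corpus.lower())
    node = node.lower()
    counts: Counter = Counter()
    for i, tok in enumerate(tokens):
        if tok == node:
            left = tokens[max(0, i - window):i]
            right = tokens[i + 1:i + 1 + window]
            counts.update(left + right)
    return counts

corpus = ("Melpomene is a genus of ferns in the family Polypodiaceae. "
          "The fern genus Melpomene grows as an epiphyte. "
          "Melpomene is also a genus of wolf spiders in the family Lycosidae.")
print(collocates(corpus, "Melpomene").most_common(5))
```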

Keywords: ontology, biodiversity, lexicography, knowledge representation, corpus linguistics

Procedia PDF Downloads 116
268 Peripheral Neuropathy After Locoregional Anesthesia

Authors: Dalila Chaid, Yacine Houmel, Mohamed Lamine Belloulou

Abstract:

Peripheral neuropathy is a rare but worrying complication of peripheral local anaesthesia. It is caused either by needle contact with the nerve root or by the direct toxicity of local anaesthetics, leading to nerve damage, injury or irritation. Although uncommon, it remains a major concern for anaesthetists. The aim of the study was to assess the prevalence of nerve block-associated neuropathy in knee surgery and to identify the contributing factors in order to minimise the occurrence of this complication. The study also assessed the severity and evolution of lesions, as well as the factors leading to neuropathic pain. Methodology: This is a retrospective observational study of cases of neuropathy related to nerve blocks of the lower limb for knee surgery over a period of seven years (2016-2022). The study included a total of 6,000 patients; the anaesthetic and neuropathic pain-related parameters collected from these patients were analysed to determine the prevalence and severity of neuropathy. Findings: The prevalence of nerve block-related neuropathy in our study was 5.8‰ for the sciatic nerve and 0.9‰ for the femoral nerve. This was higher than the rates reported in the literature, which range from 0.0 to 5‰ for the sciatic nerve and from 0.0 to 3.4‰ for the femoral nerve. These findings highlight the importance of identifying and implementing an ideal anesthesia procedure to reduce the risk of neuropathy associated with nerve blocks. Theoretical Importance: The findings of this study contribute to the existing literature on peripheral neuropathy following locoregional anesthesia. By identifying the prevalence and severity of neuropathy related to nerve blocks, as well as the underlying factors, we provide valuable insights for anesthetists to improve patient safety. This study also emphasizes the need for compliance with technical safety rules to minimize the occurrence of neuropathy. Data Collection and Analysis Procedures: For this study, retrospective data on neuropathy associated with nerve blocks for knee surgery were collected from 25 clinics over a span of seven years. Parameters related to anaesthesia and neuropathic pain were analysed to determine the prevalence, severity, and progression of neuropathy. Our results were then compared with the existing literature in order to assess their significance. Questions Addressed: This study aims to define the following points: 1. The prevalence of neuropathy associated with nerve blocks for knee surgery. 2. The factors underlying the development of neuropathy after nerve blocks. 3. Reducing the risk of neuropathy by complying with technical safety rules. 4. Assessing the severity and evolution of neuropathic pain in these cases. Conclusion: This study highlights the need for careful consideration and implementation of anesthesia procedures during nerve blocks for knee surgery. The prevalence of neuropathy linked to these blocks was higher than in the literature, emphasizing the importance of identifying and minimizing contributing factors. Compliance with technical safety rules is crucial to reduce the risk of peripheral neuropathy. This study provides valuable insights to anesthetists and contributes to improving patient safety in the field of locoregional anesthesia.

Keywords: phantom limb, neuropathic pain, lower limb amputee, ultrasound-guided locoregional anesthesia

Procedia PDF Downloads 43
267 Behavioral Analysis of Anomalies in Intertemporal Choices Through the Concept of Impatience and Customized Strategies for Four Behavioral Investor Profiles With an Application of the Analytic Hierarchy Process: A Case Study

Authors: Roberta Martino, Viviana Ventre

Abstract:

The Discounted Utility Model is the essential reference for calculating the utility of intertemporal prospects. According to this model, the value assigned to an outcome becomes smaller as the distance between the moment in which the choice is made and the instant in which the outcome is perceived grows. This diminution determines the intertemporal preferences of the individual, the psychological significance of which is encapsulated in the discount rate. The classic model provides a discount rate of linear or exponential nature, necessary for temporally consistent preferences. Empirical evidence, however, has proven that individuals apply discount rates with a hyperbolic nature, generating the phenomenon of intertemporal inconsistency. What this means is that individuals have difficulty managing their money and future. Behavioral finance, which analyzes the investor's attitude through cognitive psychology, has made it possible to understand that, beyond individual financial competence, there are factors that condition choices because they alter the decision-making process: behavioral biases. Since such cognitive biases are inevitable, to improve the quality of choices, research has focused on a personalized approach to strategies that combines behavioral finance with personality theory. From these considerations emerges the need for a procedure to construct personalized strategies that consider the personal characteristics of the client, such as age or gender, as well as personality. The work is developed in three parts. The first part discusses and investigates the weight of the degree of impatience and of the decrease in impatience in the anomalies of the discounted utility model. Specifically, the degree of decrease in impatience quantifies the impact that emotional factors generated by haste and financial market agitation have on decision making. The second part considers the relationship between decision making and personality theory. Specifically, four behavioral categories associated with four categories of behavioral investors are considered. This association allows us to interpret intertemporal choice as a combination of bias and temperament. The third part of the paper presents a method for constructing personalized strategies using the Analytic Hierarchy Process. Briefly: the first level of the analytic hierarchy process considers the goal of the strategic plan; the second level considers the four temperaments; the third level compares the temperaments with the anomalies of the discounted utility model; and the fourth level contains the different possible alternatives to be selected. The weights of the hierarchy between level 2 and level 3 are constructed considering the degrees of decrease in impatience derived for each temperament in an experimental phase. The results obtained confirm the relationship between temperaments and anomalies through the degree of decrease in impatience and highlight the actual impact of emotions on decision making. Moreover, the work proposes an original and useful way to improve financial advice. The inclusion of additional levels in the Analytic Hierarchy Process can further improve strategic personalization.
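
To make the hierarchy calculation concrete, the sketch below derives priority weights from a single pairwise comparison matrix via the principal eigenvector, which is the standard Analytic Hierarchy Process computation. The 4x4 matrix comparing four temperaments is hypothetical and does not reproduce the weights obtained in the authors' experimental phase.

```python
# Minimal sketch of deriving AHP priority weights from a reciprocal
# pairwise comparison matrix via the principal eigenvector.
# The 4x4 matrix below is hypothetical illustration data.
import numpy as np

def ahp_weights(pairwise: np.ndarray) -> np.ndarray:
    """Return normalized priority weights from a reciprocal pairwise matrix."""
    eigvals, eigvecs = np.linalg.eig(pairwise)
    principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    return principal / principal.sum()

# Hypothetical comparison of four temperaments against one anomaly (Saaty scale).
A = np.array([[1.0, 3.0, 5.0, 2.0],
              [1/3, 1.0, 2.0, 1/2],
              [1/5, 1/2, 1.0, 1/3],
              [1/2, 2.0, 3.0, 1.0]])
print(ahp_weights(A))  # weights sum to 1; a larger weight means higher priority
```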

Keywords: analytic hierarchy process, behavioral finance anomalies, intertemporal choice, personalized strategies

Procedia PDF Downloads 75
266 Photo-Fenton Degradation of Organic Compounds by Iron(II)-Embedded Composites

Authors: Marius Sebastian Secula, Andreea Vajda, Benoit Cagnon, Ioan Mamaliga

Abstract:

One of the most important classes of pollutants is represented by dyes. Their synthetic character and complex molecular structure make them more stable and difficult to biodegrade in water. The treatment of wastewaters containing dyes, in order to separate or degrade them, is therefore of major importance. Various techniques have been employed to remove and/or degrade dyes in water. Advanced oxidation processes (AOPs) are known to be among the most efficient for dye degradation. The aim of this work is to investigate the efficiency of a cheap iron-impregnated activated carbon Fenton-like catalyst for the degradation of organic compounds in aqueous solutions. In the presented study, an anionic dye, Indigo Carmine, is considered as a model pollutant. Various AOPs are evaluated for the degradation of Indigo Carmine to establish the effect of the prepared catalyst. It was found that the Iron(II)-embedded activated carbon composite significantly enhances the degradation of Indigo Carmine. Using the wet impregnation procedure, 5 g of L27 AC material were contacted with Fe(II) solutions of FeSO4 precursor at a theoretical iron content of 1% in the resulting composite. The L27 AC was impregnated for 3 h at 45 °C, then filtered, washed several times with water and ethanol, and dried at 55 °C for 24 h. Thermogravimetric analysis, Fourier transform infrared spectroscopy, X-ray diffraction, and transmission electron microscopy were employed to investigate the structure, texture, and micromorphology of the catalyst. The total iron content in the obtained composites and the iron leakage were determined by a spectrophotometric method using phenanthroline. Photo-catalytic tests were performed using a UV - Consulting Peschl Laboratory Reactor System. UV light irradiation tests were carried out to determine the performance of the prepared iron-impregnated composite towards the degradation of Indigo Carmine in aqueous solution under different conditions (17 W UV lamps, with and without in-situ generation of O3; different concentrations of H2O2, different initial concentrations of Indigo Carmine, different values of pH, different doses of NH4-OH enhancer). The photocatalytic tests were performed after the adsorption equilibrium had been established. The photo-Fenton degradation of Indigo Carmine was also tested at different values of initial concentration, and the investigated process obeys pseudo-first-order kinetics. The obtained results emphasize an enhancement of Indigo Carmine degradation in the case of the heterogeneous photo-Fenton process conducted with an O3-generating UV lamp in the presence of hydrogen peroxide. Acknowledgments: This work was supported by a grant of the Romanian National Authority for Scientific Research and Innovation, CNCS - UEFISCDI, project number PN-II-RU-TE-2014-4-0405.
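
As a minimal illustration of the pseudo-first-order treatment mentioned above, the sketch below linearises concentration-time data as ln(C0/C) = k·t and extracts the apparent rate constant from the slope. The time and concentration values are hypothetical placeholders, not the measured Indigo Carmine data of this study.

```python
# Minimal sketch of a pseudo-first-order fit, ln(C0/C) = k*t, for dye
# photodegradation data. The values below are hypothetical placeholders.
import numpy as np

t = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 60.0])   # irradiation time, min
C = np.array([20.0, 14.8, 11.0, 8.1, 6.0, 3.3])     # dye concentration, mg/L

y = np.log(C[0] / C)                                 # linearized form
k, intercept = np.polyfit(t, y, 1)                   # slope = apparent rate constant
print(f"apparent rate constant k = {k:.4f} 1/min")
```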

Keywords: photodegradation, heterogeneous Fenton, anionic dye, carbonaceous composite, screening factorial design

Procedia PDF Downloads 236
265 Direct Integration of 3D Ultrasound Scans with Patient Educational Mobile Application

Authors: Zafar Iqbal, Eugene Chan, Fareed Ahmed, Mohamed Jama, Avez Rizvi

Abstract:

Advancements in ultrasound technology have enabled machines to capture 3D and 4D images with intricate features of the growing fetus. Sonographers can now capture clear 3D images and 4D videos of the fetus, especially of the face. Fetal faces are often seen on the third-trimester ultrasound scan, where anatomical features become more defined. Parents often want 3D/4D images and videos of their ultrasounds, and particularly images that capture the child’s face. Sidra Medicine developed a patient education mobile app called 10 Moons to improve care and provide useful information throughout the pregnancy. In addition to general information, we built the ability to send ultrasound images directly from the modality to the mobile application, allowing expectant mothers to easily store and share images of their baby. 10 Moons represents the length of the pregnancy on a lunar calendar, which has both cultural and religious significance in the Middle East. During the third-trimester scan, sonographers can capture 3D pictures of the fetus. Ultrasound machines are connected to a local 10 Moons server with a Digital Imaging and Communications in Medicine (DICOM) application running on it. Sonographers are able to send images directly to the DICOM server via a preprogrammed button on the ultrasound modality. Mothers can also request which pictures they would like to be available on the app. An internally built DICOM application receives the image and saves the patient information from the DICOM header (for verification purposes). The application also anonymizes the image by removing all the DICOM header information and subsequently converts it into a lossless JPEG. Finally, the application passes the image to the mobile application server. On the 10 Moons mobile app, patients enter their Medical Record Number (MRN) and Date of Birth (DOB) to receive a One-Time Password (OTP), for security reasons, before viewing the images. Patients can also share the anonymized images with friends and family. Furthermore, patients can request 3D printed mementos of their child through 10 Moons. 10 Moons is a unique patient education and information application where expectant mothers can also see 3D ultrasound images of their children. Sidra Medicine staff has the added benefit of a full content management administrative backend where updates to content can be made. The app is available on secure infrastructure with both local and public interfaces. The application is also available in both English and Arabic to accommodate most patients in the region. Innovation is at the heart of modern healthcare management. With innovation being one of Sidra Medicine’s core values, our 10 Moons application provides expectant mothers with unique educational content as well as the ability to store and share images of their child and purchase 3D printed mementos.
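
A minimal sketch of the server-side anonymization step is given below: the received DICOM file is read, the identifiers needed for verification are kept server-side, and only the pixel data is exported so the shared file carries no DICOM header. The file names, the pydicom/Pillow toolchain and the PNG output (used here as a lossless stand-in) are assumptions for illustration, not the hospital's actual implementation.

```python
# Minimal sketch: read a received DICOM file, note patient identifiers for
# verification, then export only the pixel data (no DICOM header is kept).
# Assumes a single-frame image; names and libraries are illustrative only.
import numpy as np
import pydicom
from PIL import Image

ds = pydicom.dcmread("incoming_ultrasound.dcm")

# Header values kept server-side only, e.g. to match the image to the right patient.
mrn = str(ds.get("PatientID", ""))
dob = str(ds.get("PatientBirthDate", ""))

# Export pixel data without any DICOM metadata; scale to 8-bit for display.
pixels = ds.pixel_array.astype(np.float64)
rng = float(pixels.max() - pixels.min()) or 1.0
pixels = (255.0 * (pixels - pixels.min()) / rng).astype(np.uint8)
Image.fromarray(pixels).save("anonymized_scan.png")  # PNG used as a lossless stand-in
```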

Keywords: patient educational mobile application, ultrasound images, digital imaging and communications in medicine (DICOM), imaging informatics

Procedia PDF Downloads 108
264 An Approach to Addressing Homelessness in Hong Kong: Life Story Approach

Authors: Tak Mau Simon Chan, Ying Chuen Lance Chan

Abstract:

Homelessness has been a popular and controversial debate in Hong Kong, a city which is densely populated and well known for very expensive housing. The constitution of the homeless as threats to the community and to environmental hygiene is ambiguous and debatable in the Hong Kong context. The lack of an intervention model is the critical research gap thus far, aside from the tangible services delivered. The life story approach (LSA), with its unique humanistic orientation, has been well applied in recent decades to depict the needs of various target groups, but not the homeless. It is argued that the LSA, which has been employed by health professionals in the landscape of dementia and in health and social care settings, can be used as a reference in the local Chinese context through indigenization. Drawing on 13 social workers and 27 homeless individuals in 8 focus groups, with a further 12 homeless individuals participating in individual in-depth interviews, a framework of the LSA for homeless people is proposed. Through thematic analysis, three main themes of their life stories were generated, namely the family, negative experiences and identity transformation. The three domains solidify a framework that can be applied not only to the homeless but also to other disadvantaged groups in the Chinese context. Based on the three domains of family, negative experiences and identity transformation, the model is applied in the daily practices of social workers who help the homeless. The domain of family encompasses familial relationships from the past to the present to the speculated future, with ten sub-themes. The domain of negative experiences includes seven sub-themes, with reference to deviant behavior committed. The last domain, identity transformation, incorporates the awareness and redefining of one’s identity, with a total of seven sub-themes. The first two domains are important components of personal histories, while the third is more of an unknown, exploratory and yet-to-be-redefined territory which has a more positive and constructive orientation towards developing one’s identity and life meaning. The longitudinal temporal dimension of moving from the past to the present to the future enriches the meaning-making process, facilitates the integration of life experiences and maintains a more hopeful dialogue. The model is tested and its effectiveness is measured using qualitative and quantitative methods to affirm the extent to which it is relevant to the local context. First, it contributes to providing a clear guideline for social workers, who can use the approach as a reference source. Secondly, the framework acts as a new intervention means to address problem-saturated stories and the intangible needs of the homeless. Thirdly, the model extends the application beyond health-related issues. Last but not least, the model is highly relevant to the local indigenous context.

Keywords: homeless, indigenous intervention, life story approach, social work practice

Procedia PDF Downloads 275
263 Nigerian Football System: Examining Meso-Level Practices against a Global Model for Integrated Development of Mass and Elite Sport

Authors: I. Derek Kaka’an, P. Smolianov, D. Koh Choon Lian, S. Dion, C. Schoen, J. Norberg

Abstract:

This study was designed to examine mass participation and elite football performance in Nigeria with reference to advanced international football management practices. Over 200 sources of literature on sport delivery systems were analyzed to construct a globally applicable model of elite football integrated with mass participation, comprising the following three levels: macro- (socio-economic, cultural, legislative, and organizational), meso- (infrastructures, personnel, and services enabling sport programs) and micro-level (operations, processes, and methodologies for the development of individual athletes). The model has received scholarly validation and has been shown to be a framework for program analysis that is not culturally bound. The Smolianov and Zakus model has been employed for further understanding of sport systems such as US soccer, US rugby, swimming, tennis, and volleyball, as well as Russian and Dutch swimming. A questionnaire was developed using the above-mentioned model. Survey questions were validated by 12 experts, including academicians, executives from sport governing bodies, football coaches, and administrators. To identify best practices and determine areas for improvement of football in Nigeria, 120 coaches completed the questionnaire. Useful exemplars and possible improvements were further identified through semi-structured discussions with 10 Nigerian football administrators and experts. Finally, content analysis of the Nigeria Football Federation’s website and organizational documentation was conducted. This paper focuses on the meso-level of Nigerian football delivery, particularly the infrastructures, personnel, and services enabling sport programs. This includes training centers, competition systems, and intellectual services. Results identified remarkable achievements coupled with great potential to further develop football in different types of public and private organizations in Nigeria. These include: assimilating football competitions with other cultural and educational activities, providing favorable conditions for employees of all possible organizations to partake in and help manage football programs and events, providing football coaching integrated with counseling for the prevention of antisocial conduct, and improving cooperation between football programs and organizations for peace-making and the advancement of international relations, tourism, and socio-economic development. Accurate reporting of sports programs by the media should be encouraged through staff training for better awareness of various events. The systematic integration of these meso-level practices into the balanced development of mass and high-performance football will contribute to international sport success as well as national health, education, and social harmony.

Keywords: football, high performance, mass participation, Nigeria, sport development

Procedia PDF Downloads 228
262 Developing Computational Thinking in Early Childhood Education

Authors: Kalliopi Kanaki, Michael Kalogiannakis

Abstract:

Nowadays, in the digital era, the early acquisition of basic programming skills and knowledge is encouraged, as it facilitates students’ exposure to computational thinking and empowers their creativity, problem-solving skills, and cognitive development. More and more researchers and educators investigate the introduction of computational thinking in K-12, since it is expected to be a fundamental skill for everyone by the middle of the 21st century, just like reading, writing and arithmetic are at the moment. In this paper, doctoral research in progress is presented, which investigates the infusion of computational thinking into the science curriculum in early childhood education. The whole attempt aims to develop young children’s computational thinking by introducing them to the fundamental concepts of object-oriented programming in an enjoyable, yet educational framework. The backbone of the research is the digital environment PhysGramming (an abbreviation of Physical Science Programming), which provides children the opportunity to create their own digital games, turning them from passive consumers into active creators of technology. PhysGramming deploys an innovative hybrid schema of visual and text-based programming techniques, with emphasis on object-orientation. Through PhysGramming, young students are familiarized with basic object-oriented programming concepts, such as classes, objects, and attributes, while, at the same time, getting a view of object-oriented programming syntax. Nevertheless, the most noteworthy feature of PhysGramming is that children create their own digital games within the context of physical science courses, in a way that provides familiarization with the basic principles of object-oriented programming and computational thinking, even though no specific reference is made to these principles. Attuned to the ethical guidelines of educational research, interventions were conducted in two second-grade classes. The interventions were designed with respect to the thematic units of the curriculum of physical science courses, as part of the learning activities of the class. PhysGramming was integrated into the classroom after short introductory sessions. During the interventions, children aged 6-7 worked in pairs on computers and created their own digital games (group games, matching games, and puzzles). The authors participated in these interventions as observers in order to achieve a realistic evaluation of the proposed educational framework concerning its applicability in the classroom and its educational and pedagogical perspectives. To better examine whether the objectives of the research are met, the investigation focused on six criteria: the educational value of PhysGramming, its engaging and enjoyable characteristics, its child-friendliness, its appropriateness for the proposed purpose, its ability to monitor the user’s progress, and its individualizing features. In this paper, the functionality of PhysGramming and the philosophy of its integration into the classroom are both described in detail. Information about the implemented interventions and the results obtained is also provided. Finally, several limitations of the research conducted that deserve attention are noted.
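
For readers unfamiliar with the object-oriented concepts named above, the short sketch below illustrates a class, two objects and their attributes in the spirit of a simple matching game. It is written in plain Python purely as an illustration; it is not PhysGramming's own code or its hybrid visual/text syntax.

```python
# Illustrative sketch of the object-oriented concepts named in the abstract
# (classes, objects, attributes), in the spirit of a simple matching game.
class GameCard:
    """A class: the blueprint for every card object in a matching game."""
    def __init__(self, name: str, picture: str):
        self.name = name          # attribute: what the card shows
        self.picture = picture    # attribute: image file used on the card
        self.face_up = False      # attribute: current state of this object

    def flip(self) -> None:
        """Behaviour shared by all card objects."""
        self.face_up = not self.face_up

# Two objects (instances) created from the same class, each with its own attributes.
sun = GameCard("sun", "sun.png")
moon = GameCard("moon", "moon.png")
sun.flip()
print(sun.face_up, moon.face_up)   # True False
```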

Keywords: computational thinking, early childhood education, object-oriented programming, physical science courses

Procedia PDF Downloads 106
261 The Language of Landscape Architecture

Authors: Hosna Pourhashemi

Abstract:

Chahar Bagh, the symbol of the world, displayed around the pool of life in the centre, attempts to emulate Eden. It represents a duality concept based on the division of the universe into two perceptional insights, a celestial and an earthly one. The Chahar Bagh garden pattern refers to the Garden of Eden, which was watered and framed by four main rivers. This microcosm is combined with a mystical love of flowers, sweet-scented trees, a variety of colors, and the sense of eternal life. This symbol of the integration of the spontaneous expressivity of the natural elements and the reasoning awareness of man strives for the ideal of divine perfection. Through collecting and analyzing the data, the prevalence and continuous influence of the Chahar Bagh concept on selected historical gardens were elaborated and evaluated. After the conquest of Persia by the Arabs in the 7th century, Chahar Bagh was adopted and spread throughout the Islamic expansion, from the Middle East, westward across northern Africa to Morocco and the Iberian Peninsula, and eastward through Iran to Central Asia and India. Furthermore, its continuity up to the mid-16th-century Renaissance period is shown. By adapting the semiotic theory of Peirce and Saussure to the Persian garden, Chahar Bagh was defined as the basic pattern language for garden culture. The hypothesis of the continuous influence of the Chahar Bagh pattern language on today’s landscape architecture was examined on selected works of Dieter Kienast, as an important and relevant protagonist of his time (end of the twentieth century) and up to our time. The Chahar Bagh pattern language offers collective, culturally sensitive healing wisdom transmitted down through the millennia. Through my reflections on Dieter Kienast’s works, I transformed my personal experience into a transpersonal understanding regarding Sufi philosophy and Jungian psychology, which leads me to define new design theories and methods to form a spiritual model, which I call the “Quaternary Perception Model”, for landscape architecture. Based on a cognition process of self-journeying in this holistic model, human consciousness can be developed to access “higher” levels of being and embrace unity. Self-purification and mindfulness through transpersonal confrontation in the “Quaternary Perception Model” generate a form of heart-based treatment. I adapted the seven spiritual levels of Sufi self-development to the perception of landscape, representing the stages of the self, ranging from absolutely self-centered to pure spiritual humanity. I redefine and reread the elements and features of the Chahar Bagh pattern language for today’s landscape architecture. The “lost paradise” lies in our heart and can be perceived by all humans in landscapes and cities designed in the spirit of the “Quaternary Model”.

Keywords: Persian garden, pattern language of Chahar Bagh, holistic perception, Dieter Kienast, quaternary model

Procedia PDF Downloads 63
260 Kansei Engineering Applied to the Design of Rural Primary Education Classrooms: Design-Based Learning Case

Authors: Jimena Alarcon, Andrea Llorens, Gabriel Hernandez, Maritza Palma, Lucia Navarrete

Abstract:

The research is funded by the Government of Chile and is focused on defining the design of rural primary classrooms that stimulate creativity. The relevance of the study lies in its capacity to define adequate educational spaces for the implementation of the design-based learning (DBL) methodology. This methodology promotes creativity and teamwork, generating a meaningful learning experience for students, based on the appreciation of their environment and the generation of projects that contribute positively to their communities; it is also an inquiry-based form of learning built on the integration of design thinking and the design process into the classroom. The main goal of the study is to define the design characteristics of rural primary school classrooms associated with the implementation of the DBL methodology. Along with the change in learning strategies, it is necessary to change the educational spaces in which they develop. The hypothesis indicates that a change in the space and equipment of the classrooms, based on the emotions of the students, will motivate better learning results with the implementation of a new methodology. In this case, the pedagogical dynamics require an important interaction between the participants, as well as an environment favorable to creativity. Methodologies from Kansei engineering are used to identify the emotional variables associated with their definition. The study was conducted with 50 students between 6 and 10 years old (average age of seven years), 48% male and 52% female. Virtual three-dimensional scale models and semantic differential tables are used. To define the semantic differential, self-applied surveys were carried out. Each survey consists of eight separate questions in two groups: group A questions to find desirable emotions, and group B questions related to emotions. Both question groups have a maximum of three alternatives to answer. Data were tabulated with IBM SPSS Statistics version 19. Terms referring to emotions were grouped into the twenty concepts with the highest presence in the surveys. To select the values obtained from the semantic differential, the expected frequency of the chi-square test (χ2) calculated for the classroom space is considered the lower limit. All terms above the expected N cut-off point are included in the tables used to find a relation between emotion and space. The statistical contrast (chi-square) indicates that the observed frequencies are not random. The most representative terms then depend on the variable under study: a) the definition of textures and the color of vertical surfaces are associated with emotions such as tranquility, attention, concentration and creativity; and b) the distribution of the equipment in the rooms is associated with emotions such as happiness, distraction, creativity and freedom. The main findings are linked to the generation of classrooms according to diverse DBL team dynamics. Kansei engineering is the appropriate methodology to identify the emotions that students want to feel in the classroom space.
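
A minimal sketch of the chi-square screening step described above is given below: it tests whether observed emotion-term frequencies depart from a uniform (random) distribution, so that only terms above the expected-frequency cut-off are retained. The term counts are hypothetical, not the survey data of this study.

```python
# Minimal sketch of the chi-square screening step: test whether the observed
# frequencies of emotion terms differ from a uniform (random) distribution.
# The counts below are hypothetical, not the survey data from this study.
from scipy.stats import chisquare

emotion_terms = ["tranquility", "attention", "concentration", "creativity", "happiness"]
observed = [18, 12, 9, 21, 15]           # hypothetical term frequencies from surveys

result = chisquare(observed)              # expected defaults to equal frequencies
print(f"chi2 = {result.statistic:.2f}, p = {result.pvalue:.4f}")
# A small p-value suggests the term frequencies are not random, so the terms
# above the expected-frequency cut-off are kept for the emotion-space tables.
```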

Keywords: creativity, design-based learning, education spaces, emotions

Procedia PDF Downloads 131
259 Technology and the Need for Integration in Public Education

Authors: Eric Morettin

Abstract:

Cybersecurity and digital literacy are pressing issues among Canadian citizens, yet formal education does not provide today’s students with the necessary knowledge and skills needed to adapt to these challenging issues within the physical and digital labor market. Canada’s current education systems do not highlight the importance of these respective fields, aside from using technology for learning management systems and alternative methods of assignment completion. Educators are not properly trained to integrate technology into the compulsory courses within public education in ways that would better prepare their learners for these topics and for Canada’s digital economy. ICTC addresses these gaps in education and training through cross-Canadian educational programming in digital literacy and competency, cybersecurity and coding, which is bridged with Canada’s provincially regulated K-12 curriculum guidelines. After analyzing Canada’s provincial education, it is apparent that there are gaps in learning related to technology, as well as inconsistent educational outcomes that do not adequately represent the current Canadian and global economies. Presently, only New Brunswick, Nova Scotia, Ontario, and British Columbia offer curriculum guidelines for cybersecurity, computer programming, and digital literacy. The remaining provinces do not address these skills in their curriculum guidelines. Moreover, certain courses in some provinces have not been updated since the 1990s. The three territories take curriculum strands from other provinces and use them as their foundation for education. Yukon uses British Columbia’s curriculum in its entirety, while the Northwest Territories and Nunavut use a hybrid of the Alberta and Saskatchewan curricula as their foundation of learning. Education that is provincially regulated does not allow for consistency across the country’s educational outcomes and what Canada’s students will achieve, especially when curriculum outcomes have not been updated to reflect present-day society. Through this, ICTC has aligned Canada’s provincially regulated curriculum and created opportunities for focused education in the realm of technology to better serve Canada’s present learners and teachers, while addressing inequalities and applicability within curriculum strands and outcomes across the country. As a result, lessons, units, and formal assessment strategies have been created to benefit students and teachers in this interdisciplinary, cross-curricular practice, as well as to meet their compulsory education requirements and develop skills and literacy in cyber education. Teachers can access these lessons and units through ICTC’s website, as well as receive professional development regarding the assessment and implementation of these offerings from ICTC’s education coordinators, whose combined experience exceeds 50 years of teaching in public, private, international, and Indigenous schools. We encourage you to take this opportunity, which will benefit students and educators and will bridge the learning and curriculum gaps in Canadian education to better reflect the ever-changing public, social, and career landscape that all citizens are a part of. Students are the future, and we at ICTC strive to ensure their futures are bright and prosperous.

Keywords: cybersecurity, education, curriculum, teachers

Procedia PDF Downloads 60
258 Regularized Euler Equations for Incompressible Two-Phase Flow Simulations

Authors: Teng Li, Kamran Mohseni

Abstract:

This paper presents an inviscid regularization technique for incompressible two-phase flow simulations. This technique is known as the observable method, based on the notion of observability that any feature smaller than the actual resolution (physical or numerical), e.g., the size of the wire in hotwire anemometry or the grid size in numerical simulations, cannot be captured or observed. Unlike most regularization techniques, which are applied to the numerical discretization, the observable method is employed at the PDE level during the derivation of the equations. Difficulties in the simulation and analysis of realistic fluid flow often result from discontinuities (or near-discontinuities) in the calculated fluid properties or state. Accurately capturing these discontinuities is especially crucial when simulating flows involving shocks, turbulence or sharp interfaces. Over the past several years, the properties of this new regularization technique have been investigated, showing its capability of simultaneously regularizing shocks and turbulence. The observable method has been applied to direct numerical simulations of shocks and turbulence, where the discontinuities are successfully regularized and the flow features are well captured. In the current paper, the observable method is extended to two-phase interfacial flows. Multiphase flows share similar features with shocks and turbulence, namely the nonlinear irregularity caused by the nonlinear terms in the governing equations, here the Euler equations. In the direct numerical simulation of two-phase flows, the interfaces are usually treated as a smooth transition of the properties from one fluid phase to the other. However, in high Reynolds number or low viscosity flows, the nonlinear terms will generate smaller scales which sharpen the interface, causing discontinuities. Many numerical methods for two-phase flows fail in the high Reynolds number case, while some others depend on numerical diffusion from the spatial discretization. The observable method regularizes this nonlinear mechanism by filtering the convective terms, and this process is inviscid. The filtering effect is controlled by an observable scale, which is usually about a grid length. A single rising bubble and the Rayleigh-Taylor instability are studied, in particular, to examine the performance of the observable method. A pseudo-spectral method, which does not introduce numerical diffusion, is used for spatial discretization, and a Total Variation Diminishing (TVD) Runge-Kutta method is applied for time integration. The observable incompressible Euler equations are solved for these two problems. In the rising bubble problem, the terminal velocity and shape of the bubble are particularly examined and compared with experiments and other numerical results. In the Rayleigh-Taylor instability, the shape of the interface is studied for different observable scales, and the spike and bubble velocities, as well as positions (under a proper observable scale), are compared with other simulation results. The results indicate that this regularization technique can potentially regularize the sharp interface in two-phase flow simulations.
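
To convey the basic idea of filtering the convective term at an observable scale, the sketch below applies a spectral low-pass filter to the advecting velocity in the 1D inviscid Burgers equation. This is only a stand-in under simplifying assumptions (1D, periodic domain, second-order SSP time stepping); it is not the authors' observable Euler formulation for two-phase flow.

```python
# Minimal 1D sketch of filtering a convective term at an "observable" scale
# before evaluating the nonlinearity, using the inviscid Burgers equation and
# a simple spectral low-pass filter as a stand-in for the method above.
import numpy as np

N = 256
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
u = np.sin(x)                                  # initial velocity field
k = np.fft.fftfreq(N, d=x[1] - x[0]) * 2.0 * np.pi
alpha = 4.0 * (x[1] - x[0])                    # observable scale ~ a few grid lengths

def filtered(f: np.ndarray) -> np.ndarray:
    """Low-pass filter a field, removing wavenumbers above ~1/alpha."""
    fhat = np.fft.fft(f)
    fhat[np.abs(k) > 1.0 / alpha] = 0.0
    return np.real(np.fft.ifft(fhat))

def rhs(u: np.ndarray) -> np.ndarray:
    """du/dt = -(filtered u) * du/dx : convective term advected by the filtered velocity."""
    dudx = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
    return -filtered(u) * dudx

dt = 1e-3
for _ in range(2000):                          # SSP (TVD) Runge-Kutta 2 time stepping
    u1 = u + dt * rhs(u)
    u = 0.5 * (u + u1 + dt * rhs(u1))
```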

Keywords: Euler equations, incompressible flow simulation, inviscid regularization technique, two-phase flow

Procedia PDF Downloads 475
257 Statistical Analysis to Compare between Smart City and Traditional Housing

Authors: Taha Anjamrooz, Sareh Rajabi, Ayman Alzaatreh

Abstract:

Smart cities are playing important roles in real life. Integration and automation between the different features of modern cities and information technologies improve smart city efficiency, energy management, human and equipment resource management, quality of life and the utilization of resources for the customers. One of the difficulties in this path is the use of, and the interface and link between, software, hardware, and other IT technologies to develop and optimize processes in various business fields such as construction, supply chain management and transportation, in parallel with cost-effectiveness and resource-reduction impacts. Also, smart cities are certainly intended to play a vital role in offering a sustainable and efficient model for smart houses while mitigating environmental and ecological matters. Energy management is one of the most important matters for smart houses in smart cities and communities, because of the sensitivity of energy systems, the reduction of energy wastage and the maximization of utilization of the required energy. In particular, the consumption of energy in smart houses is important and considerable for the economic balance and energy management of a smart city, as it yields significant increases in energy saving and reductions in energy wastage. This research paper develops the features and concept of the smart city in terms of overall efficiency through various effective variables. The selected variables and observations are analyzed through data analysis processes to demonstrate the efficiency of the smart city and compare the effectiveness of each variable. Ten variables are chosen in this study to improve the overall efficiency of the smart city by increasing the effectiveness of smart houses using an automated solar photovoltaic system, an RFID system, a smart meter and other major elements, by interfacing between software and hardware devices as well as IT technologies. A second aim is to enhance the energy management aspect through energy saving within the smart house via efficient variables. The main objective of the smart city and smart houses is to produce energy and increase its efficiency through the selected variables, with a comfortable and harmless atmosphere for the customers within the smart city, in combination with control over energy consumption in the smart house using developed IT technologies. Initially, the comparison between traditional housing and smart city samples is conducted to indicate the more efficient system. Moreover, the main variables involved in measuring the overall efficiency of the system are analyzed through various processes to identify and prioritize the variables in accordance with their influence over the model. The analysis of the results of this model can be used for comparison and benchmarking against the traditional lifestyle to demonstrate the advantages of smart cities. Furthermore, due to the expense and expected shortage of natural resources in the near future, the insufficiently developed research in the region, and the available potential due to climate and governmental vision, the results and analysis of this study can be used as key indicators to select the most effective variables or devices during the design and construction phases.
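
As one illustrative step in such a comparison, the sketch below runs a two-sample t-test on monthly household energy consumption for smart versus traditional housing and reports the mean saving. The figures are hypothetical placeholders, not data collected in this study, and the choice of test is an assumption made for illustration.

```python
# Minimal sketch of one comparison step: a two-sample t-test on monthly
# energy consumption of smart versus traditional houses (hypothetical data).
import numpy as np
from scipy import stats

smart = np.array([310, 295, 330, 305, 290, 315, 300, 285])        # kWh/month
traditional = np.array([420, 455, 430, 445, 410, 460, 435, 440])  # kWh/month

t_stat, p_value = stats.ttest_ind(smart, traditional, equal_var=False)
saving = 100.0 * (1.0 - smart.mean() / traditional.mean())
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, mean saving = {saving:.1f}%")
```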

Keywords: smart city, traditional housing, RFID, photovoltaic system, energy efficiency, energy saving

Procedia PDF Downloads 94
256 A Smart Sensor Network Approach Using Affordable River Water Level Sensors

Authors: Dian Zhang, Brendan Heery, Maria O’Neill, Ciprian Briciu-Burghina, Noel E. O’Connor, Fiona Regan

Abstract:

Recent developments in sensors, wireless data communication and cloud computing have brought the sensor web to a whole new generation. The introduction of the concept of the ‘Internet of Things (IoT)’ has brought sensor research to a new level, which involves the development of long-lasting, low-cost, environmentally friendly and smart sensors; new wireless data communication technologies; and big data analytics algorithms and cloud-based solutions that are tailored to large-scale smart sensor networks. The next generation of smart sensor networks consists of several layers: the physical layer, where all the smart sensors reside and data pre-processing occurs, either on the sensor itself or on the field gateway; the data transmission layer, where data and instruction exchanges happen; and the data processing layer, where meaningful information is extracted and organized from the pre-processed data stream. There are many definitions of a smart sensor; however, to summarize all these definitions, a smart sensor must be intelligent and adaptable. In a future large-scale sensor network, the collected data are far too large for traditional applications to send, store or process. The sensor unit must be intelligent enough to pre-process the collected data locally on board (this process may occur on the field gateway, depending on the sensor network structure). In this case study, three smart sensing methods, corresponding to simple thresholding, a statistical model and the machine-learning-based MoPBAS method, are introduced and their strengths and weaknesses are discussed as an introduction to the smart sensing concept. Data fusion, the integration of data and knowledge from multiple sources, is a key component of the next generation smart sensor network. For example, in the water level monitoring system, a weather forecast can be extracted from external sources and, if heavy rainfall is expected, the server can send instructions to the sensor nodes to, for instance, increase the sampling rate or switch the sleeping mode on or off. In this paper, we describe the deployment of 11 affordable water level sensors in the Dublin catchment. The objective of this paper is to use the deployed river level sensor network in the Dodder catchment in Dublin, Ireland as a case study to give a vision of the next generation of smart sensor networks for flood monitoring, to assist agencies in making decisions about deploying resources in the case of a severe flood event. Some of the deployed sensors are located alongside traditional water level sensors for validation purposes. Using the 11 deployed river level sensors in a network as a case study, a vision of the next generation of smart sensor networks is proposed. Each key component of the smart sensor network is discussed, which hopefully inspires researchers who are working in the sensor research domain.
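
A minimal sketch of the simplest of the three on-board methods mentioned above, thresholding, is shown below: a node transmits a water level reading only when it crosses an alert level or has changed appreciably since the last transmission. The threshold values and readings are hypothetical, not the settings of the Dodder deployment.

```python
# Minimal sketch of on-board smart sensing by simple thresholding: only report
# a water level reading when it crosses an alert level or changes significantly,
# instead of streaming every sample. Values are hypothetical illustrations.
ALERT_LEVEL_M = 1.8      # water level that triggers an immediate alert
MIN_CHANGE_M = 0.05      # report only if the level moved by at least this much

def filter_readings(readings, alert=ALERT_LEVEL_M, min_change=MIN_CHANGE_M):
    """Return the subset of readings worth transmitting to the server."""
    to_send = []
    last_sent = None
    for level in readings:
        if level >= alert or last_sent is None or abs(level - last_sent) >= min_change:
            to_send.append(level)
            last_sent = level
    return to_send

hourly_levels = [1.20, 1.21, 1.22, 1.30, 1.55, 1.85, 2.10]   # metres
print(filter_readings(hourly_levels))   # only the informative samples are sent
```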

Keywords: smart sensing, internet of things, water level sensor, flooding

Procedia PDF Downloads 361
255 Thermodynamics of Aqueous Solutions of Organic Molecule and Electrolyte: Use Cloud Point to Obtain Better Estimates of Thermodynamic Parameters

Authors: Jyoti Sahu, Vinay A. Juvekar

Abstract:

Electrolytes are often used to bring about salting-in and salting-out of organic molecules and polymers (e.g. polyethylene glycols/proteins) from aqueous solutions. For the quantification of these phenomena, a thermodynamic model which can accurately predict the activity coefficient of the electrolyte as a function of temperature is needed. The thermodynamic models available in the literature contain a large number of empirical parameters. These parameters are estimated using the lower/upper critical solution temperature of the electrolyte/organic molecule solution at different temperatures. Since the number of parameters is large, inaccuracy can creep in during their estimation, which can affect the reliability of prediction beyond the range in which these parameters are estimated. The cloud point of a solution is related to its free energy through temperature and composition derivatives. Hence, cloud point measurements can be used for accurate estimation of the temperature and composition dependence of the parameters in the model for the free energy. Hence, if we use a two-pronged procedure in which we first use the cloud point of the solution to estimate some of the parameters of the thermodynamic model and determine the rest using osmotic coefficient data, we gain on two counts. First, since the parameters estimated in each of the two steps are fewer, we achieve higher accuracy of estimation. The second and more important gain is that the resulting model parameters are more sensitive to temperature. This is crucial when we wish to use the model outside the temperature window within which the parameters are estimated. The focus of the present work is to prove this proposition. We have used electrolyte (NaCl/Na2CO3)-water-organic molecule (isopropanol/ethanol) as the model system. The Robinson-Stokes-Glueckauf model is modified by incorporating temperature-dependent Flory-Huggins interaction parameters. The Helmholtz free energy expression contains, in addition to the electrostatic and translational entropic contributions, three Flory-Huggins pairwise interaction contributions, viz. χwp, χws and χps (w-water, p-polymer, s-salt). These parameters depend both on temperature and on concentrations. The concentration dependence is expressed in the form of a quadratic expression involving the volume fractions of the interacting species, while the temperature dependence is expressed through an assumed functional form of temperature. To obtain the temperature-dependent interaction parameters for the organic molecule-water and electrolyte-water systems, the critical solution temperature of the electrolyte-water-organic molecule system is measured using a cloud point measuring apparatus. The temperature- and composition-dependent interaction parameters for the electrolyte-water-organic molecule system are estimated through measurement of the cloud point of the solution. The model is used to estimate the critical solution temperature (CST) of the electrolyte-water-organic molecule solution. We have experimentally determined the critical solution temperature of different compositions of the electrolyte-water-organic molecule solution and compared the results with the estimates based on our model. The two sets of values show good agreement. On the other hand, when only osmotic coefficients are used for estimation of the free energy model, the CST predicted using the resulting model shows poor agreement with the experiments. Thus, the importance of the CST data in the estimation of the parameters of the thermodynamic model is confirmed through this work.
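
To illustrate the estimation idea in the simplest possible terms, the sketch below fits a temperature-dependent interaction parameter of an assumed form, χ(T) = a + b/T, to interaction-parameter values back-calculated at cloud-point temperatures, by least squares. Both the functional form and the data points are hypothetical illustrations; they are not the model equations or the measurements of this work.

```python
# Minimal sketch of the parameter-estimation idea: least-squares fit of a
# temperature-dependent interaction parameter chi(T) = a + b/T to values
# associated with cloud-point (critical solution temperature) data.
# Functional form and data are hypothetical illustrations only.
import numpy as np
from scipy.optimize import curve_fit

def chi_of_T(T, a, b):
    """Assumed temperature dependence of the interaction parameter."""
    return a + b / T

T_cloud = np.array([298.0, 308.0, 318.0, 328.0, 338.0])   # K, hypothetical
chi_obs = np.array([0.48, 0.46, 0.45, 0.44, 0.43])        # back-calculated chi at cloud point

(a_fit, b_fit), cov = curve_fit(chi_of_T, T_cloud, chi_obs)
print(f"a = {a_fit:.3f}, b = {b_fit:.1f} K")
```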

Keywords: concentrated electrolytes, Debye-Hückel theory, interaction parameters, Robinson-Stokes-Glueckauf model, Flory-Huggins model, critical solution temperature

Procedia PDF Downloads 366
254 Green Synthesis (Using Environment Friendly Bacteria) of Silver-Nanoparticles and Their Application as Drug Delivery Agents

Authors: Sutapa Mondal Roy, Suban K. Sahoo

Abstract:

The primary aim of this work is to synthesize silver nanoparticles (AgNPs) through environmentally benign routes to avoid any chemical-toxicity-related undesired side effects. The nanoparticles were stabilized with the drug ciprofloxacin (Cp) and were studied for their effectiveness as a drug delivery agent. Targeted drug delivery improves the therapeutic potential of drugs at the diseased site as well as lowering the overall dose and undesired side effects. The small size of nanoparticles greatly facilitates the transport of active agents (drugs) across biological membranes and allows them to pass through the smallest capillaries in the body, which are 5-6 μm in diameter, and can minimize possible undesired side effects. AgNPs are non-toxic, inert and stable, have a high binding capacity and thus can be considered biomaterials. AgNPs were synthesized from the nutrient broth supernatant after culture of the environmentally friendly bacterium Bacillus subtilis. The AgNPs were found to show the surface plasmon resonance (SPR) band at 425 nm. The formation of the Cp-capped Ag nanoparticles was complete within 30 minutes, as confirmed by absorbance spectroscopy. The physico-chemical nature of the AgNPs-Cp system was characterized by dynamic light scattering (DLS), transmission electron microscopy (TEM), etc. The AgNPs-Cp system size was found to be in the range of 30-40 nm. To monitor the kinetics of drug release from the surface of the nanoparticles, the release of Cp was carried out by careful dialysis, keeping the AgNPs-Cp system inside the dialysis bag at pH 7.4 over time. The drug release was almost complete after 30 hrs. To understand the AgNPs-Cp system better during the drug delivery process, a thorough theoretical investigation has been performed employing density functional theory. Electronic charge transfer, electron density and binding energy, as well as thermodynamic properties like enthalpy, entropy, Gibbs free energy, etc., have been predicted. The electronic and thermodynamic properties, governed by the AgNPs-Cp interactions, indicate that the formation of the AgNPs-Cp system is exothermic, i.e., a thermodynamically favorable process. The binding energy and charge transfer analyses imply the optimum stability of the AgNPs-Cp system. Thus, the synthesized Cp-Ag nanoparticles can be effectively used for biological purposes due to their environmentally benign synthesis route, which is clean, biocompatible, non-toxic, safe, cost-effective, sustainable and eco-friendly. The Cp-AgNPs as biomaterials can be successfully used for drug delivery procedures due to the slow release of the drug from the nanoparticles over a considerable period of time. The kinetics of the drug release show that this drug-nanoparticle assembly can be effectively used as a potential tool for therapeutic applications. The ease of the synthetic procedure, the lack of chemical toxicity and the biological activity, along with excellent applicability as a drug delivery agent, open up the vista of using nanoparticles as effective and successful drug delivery agents in modern medicine.
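
As an illustration of how such release kinetics are commonly characterized, the sketch below fits the early portion of a cumulative release curve to the Korsmeyer-Peppas power law, Mt/M∞ = k·t^n. The time points and release fractions are hypothetical, not the dialysis data of the Cp-AgNP system, and the choice of model is an assumption made for illustration.

```python
# Minimal sketch of characterizing release kinetics: fit the early cumulative
# release fraction to the Korsmeyer-Peppas power law, Mt/Minf = k * t^n.
# Time points and release fractions are hypothetical placeholders.
import numpy as np
from scipy.optimize import curve_fit

def korsmeyer_peppas(t, k, n):
    return k * np.power(t, n)

t_hr = np.array([1, 2, 4, 6, 8, 12], dtype=float)            # hours
released = np.array([0.08, 0.14, 0.24, 0.33, 0.40, 0.52])    # fraction released (< 0.6)

(k_fit, n_fit), _ = curve_fit(korsmeyer_peppas, t_hr, released, p0=[0.1, 0.5])
print(f"k = {k_fit:.3f}, n = {n_fit:.2f}")   # smaller n values are typically read as diffusion-controlled release
```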

Keywords: silver nanoparticles, ciprofloxacin, density functional theory, drug delivery

Procedia PDF Downloads 365
253 Biomaterials Solutions to Medical Problems: A Technical Review

Authors: Ashish Thakur

Abstract:

This technical paper was written with a view to focusing on biomaterials and their various applications in modern industries. The author tries to elaborate not only the medical applications but, in fact, the plentiful applications in other industries. The scope of the research area covers the wide range of physical, biological and chemical sciences that underpin the design of biomaterials and the clinical disciplines in which they are used. A biomaterial is now defined as a substance that has been engineered to take a form which, alone or as part of a complex system, is used to direct, by control of interactions with components of living systems, the course of any therapeutic or diagnostic procedure. Biomaterials are invariably in contact with living tissues. Thus, interactions between the surface of a synthetic material and the biological environment must be well understood. This paper reviews the benefits and challenges associated with the surface modification of metals in biomedical applications. The paper also elaborates how the surface characteristics of metallic biomaterials, such as surface chemistry, topography, surface charge, and wettability, influence protein adsorption and subsequent cell behavior in terms of adhesion, proliferation, and differentiation at the biomaterial-tissue interface. The chapter also highlights various techniques required for the surface modification and coating of metallic biomaterials, including physicochemical and biochemical surface treatments and calcium phosphate and oxide coatings. In this review, attention is focused on biomaterial-associated infections, from which the need for anti-infective biomaterials originates. Biomaterial-associated infections differ markedly in epidemiology, aetiology and severity, depending mainly on the anatomic site, on the time of biomaterial application, and on the depth of the tissues harbouring the prosthesis. Here, the diversity and complexity of the different scenarios where medical devices are currently utilised are explored, providing an overview of the emblematic applicative fields and of the requirements for anti-infective biomaterials. In addition to this, the chapter introduces nanomedicine and the use of both natural and synthetic polymeric biomaterials, focuses on specific current polymeric nanomedicine applications and research, and concludes with the challenges of nanomedicine research. Infection is currently regarded as the most severe and devastating complication associated with the use of biomaterials. Osteoporosis is a worldwide disease with a very high prevalence in humans older than 50. The main clinical consequences are bone fractures, which often lead to patient disability or even death. A number of commercial biomaterials are currently used to treat osteoporotic bone fractures, but most of these have not been specifically designed for that purpose. Many drug- or cell-loaded biomaterials have been proposed in research laboratories, but very few have received approval for commercial use. Polymeric nanomaterial-based therapeutics plays a key role in the field of medicine in treatment areas such as drug delivery, tissue engineering, cancer, diabetes, and neurodegenerative diseases. Advantages of the use of polymers over other materials for nanomedicine include increased functionality, design flexibility, improved processability, and, in some cases, biocompatibility.

Keywords: nanomedicine, tissue, infections, biomaterials

Procedia PDF Downloads 241
252 Numerical Analyses of Dynamics of Deployment of PW-Sat2 Deorbit Sail Compared with Results of Experiment under Micro-Gravity and Low Pressure Conditions

Authors: P. Brunne, K. Ciechowska, K. Gajc, K. Gawin, M. Gawin, M. Kania, J. Kindracki, Z. Kusznierewicz, D. Pączkowska, F. Perczyński, K. Pilarski, D. Rafało, E. Ryszawa, M. Sobiecki, I. Uwarowa

Abstract:

The large amount of space debris now in orbit constitutes a real threat to operating spacecraft; therefore, the main purpose of the PW-Sat2 team was to create a system that could help clear the Earth's orbit after each small satellite's mission. After four years of development, a motorless, low-energy-consumption and low-weight system has been created. During a series of tests, the system has shown reliable and efficient operation. The PW-Sat2 deorbit system is a square-shaped sail which covers an area of 4 m². The sail surface is made of 6 μm aluminized Mylar film which is stretched across 4 diagonally placed arms, each consisting of two C-shaped flat springs and enveloped in Mylar sleeves. The sail is coiled using a special, custom-designed folding stand that provides automation and repeatability of the sail unwinding tests and is placed in a container with an inner diameter of 85 mm. In the final configuration, the deorbit system weighs ca. 600 g and occupies 0.6U (in accordance with the CubeSat standard). The sail's releasing system requires a minimal amount of power and is based on a thermal knife that burns through the Dyneema wire which holds the system before deployment. The sail is pushed out of the container to a safe distance (20 cm) from the satellite. The energy for the deployment is provided entirely by the coiled C-shaped flat springs, which, during the release, unfold the sail surface. To avoid dynamic effects on the satellite's structure, there is a rotational link between the sail and the satellite's main body. To obtain complete knowledge about the complex dynamics of the deployment, a number of experiments have been performed in varied environments. A numerical model of the dynamics of the sail's deployment has been built and is still under continuous development. Currently, the integration of the flight model and the deorbit sail is being performed. The launch is scheduled for February 2018. At the same time, in cooperation with the United Nations Office for Outer Space Affairs, sail models and the requested facilities are being prepared for the sail deployment experiment under micro-gravity and low-pressure conditions at the Bremen Drop Tower, Germany. Results of those tests will provide comprehensive knowledge about deployment in the space environment to which the system will be exposed during its mission. Outcomes of the numerical model and the tests will be compared afterwards and will help the team build a reliable and correct model of the very complex phenomenon of deployment of four C-shaped flat springs with a surface attached. The verified model could be used, inter alia, to investigate whether the PW-Sat2 sail is scalable and how far the design can be enlarged when creating systems for bigger satellites.
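The abstract does not give the equations of the deployment model; as a rough illustration of the kind of lumped-parameter approach such a model could start from, the sketch below integrates one arm treated as a torsional spring driving an effective inertia, with lumped damping from film and sleeve friction. All symbols and parameter values (I_arm, k_spring, c_damp, theta_open) are hypothetical placeholders, not PW-Sat2 data.

    # Minimal lumped-parameter sketch of one spring-driven sail arm unfolding.
    # All parameter values are assumed placeholders, not PW-Sat2 data.
    import numpy as np
    from scipy.integrate import solve_ivp

    I_arm = 2.0e-3    # effective moment of inertia of arm + sail segment [kg*m^2] (assumed)
    k_spring = 0.05   # torsional stiffness of the coiled C-shaped spring [N*m/rad] (assumed)
    c_damp = 1.0e-3   # lumped damping from film/sleeve friction [N*m*s/rad] (assumed)
    theta_open = np.pi / 2   # fully deployed arm angle [rad]

    def arm_dynamics(t, y):
        theta, omega = y
        torque = k_spring * (theta_open - theta) - c_damp * omega
        return [omega, torque / I_arm]

    sol = solve_ivp(arm_dynamics, (0.0, 5.0), [0.0, 0.0], dense_output=True)
    for t in np.linspace(0.0, 5.0, 6):
        theta = sol.sol(t)[0]
        print(f"t = {t:3.1f} s, deployment = {100 * theta / theta_open:5.1f} %")

A full model of four coupled arms with the attached film is of course far more involved; the point here is only the structure of the state equations that such a simulation integrates.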

Keywords: cubesat, deorbitation, sail, space debris

Procedia PDF Downloads 271
251 Effective Affordable Housing Finance in Developing Economies: An Integration of Demand and Supply Solutions

Authors: Timothy Akinwande, Eddie Hui, Karien Dekker

Abstract:

Housing the urban poor remains a persistent challenge, despite evident research attention over many years. It is, therefore, pertinent to investigate affordable housing provision challenges with novel approaches. For innovative solutions to affordable housing constraints, it is apposite to thoroughly examine housing solutions vis-à-vis the key elements of the housing supply value chain (HSVC), which are housing finance, housing construction and land acquisition. A pragmatic analysis will examine affordable housing solutions from demand and supply perspectives to arrive at consolidated solutions from bilateral viewpoints. This study thoroughly examined the informal housing finance strategies of the urban poor and diligently investigated expert opinion on affordable housing finance solutions. The research questions were: (1) What mutual grounds exist between informal housing finance solutions of the urban poor and housing expert solutions to affordable housing finance constraints in developing economies? (2) What are effective approaches to affordable housing finance in developing economies from an integrated demand-supply perspective? Semi-structured interviews were conducted in the 5 largest slums of Lagos, Nigeria, with 40 informal settlers for demand-oriented solutions, while a focus group discussion and in-depth interviews were conducted with 12 housing experts in Nigeria for supply-oriented solutions. Following rigorous thematic, content and descriptive analyses of the data using NVivo and Excel, the findings ascertained mutual solutions from both demand and supply standpoints that can be consolidated into more effective affordable housing finance solutions in Nigeria. Deliberate finance models that recognise and include the finance realities of the urban poor were found to be the most significant supply-side housing finance solution, representing 25.4% of total expert responses. Findings also show that 100% of the sampled urban poor engage in vocations where they earn little, irregular income or no income, limiting their housing finance capacities and creditworthiness. The survey revealed that the urban poor are involved in community savings and employ microfinance institutions within the informal settlements to tackle their housing finance predicaments. These are informal finance models of the urban poor, revealing common ground between demand and supply solutions for affordable housing financing. An effective affordable housing approach would be to modify, institutionalise and incorporate the informal finance strategies of the urban poor into deliberate government policies. This consolidation of solutions from demand and supply perspectives can eliminate the persistent misalliance between affordable housing demand and affordable housing supply. This study provides insights into mutual housing solutions from demand and supply perspectives, and the findings are informative for effective affordable housing provision approaches in developing countries. The study is novel in consolidating affordable housing solutions from demand and supply viewpoints, especially in relation to housing finance as a key component of the HSVC. The framework for effective affordable housing finance in developing economies from a consolidated viewpoint generated in this study is significant for the achievement of the sustainable development goals, especially goal 11 for sustainable, resilient and inclusive cities. The findings are vital for future housing studies.

Keywords: affordable housing, affordable housing finance, developing economies, effective affordable housing, housing policy, urban poor, sustainable development goal, sustainable affordable housing

Procedia PDF Downloads 49
250 Revolutionizing Accounting: Unleashing the Power of Artificial Intelligence

Authors: Sogand Barghi

Abstract:

The integration of artificial intelligence (AI) in accounting practices is reshaping the landscape of financial management. This paper explores the innovative applications of AI in the realm of accounting, emphasizing its transformative impact on efficiency, accuracy, decision-making, and financial insights. By harnessing AI's capabilities in data analysis, pattern recognition, and automation, accounting professionals can redefine their roles, elevate strategic decision-making, and unlock unparalleled value for businesses. This paper delves into AI-driven solutions such as automated data entry, fraud detection, predictive analytics, and intelligent financial reporting, highlighting their potential to revolutionize the accounting profession. Artificial intelligence has swiftly emerged as a game-changer across industries, and accounting is no exception. This paper seeks to illuminate the profound ways in which AI is reshaping accounting practices, transcending conventional boundaries, and propelling the profession toward a new era of efficiency and insight-driven decision-making. One of the most impactful applications of AI in accounting is automation. Tasks that were once labor-intensive and time-consuming, such as data entry and reconciliation, can now be streamlined through AI-driven algorithms. This not only reduces the risk of errors but also allows accountants to allocate their valuable time to more strategic and analytical tasks. AI's ability to analyze vast amounts of data in real time enables it to detect irregularities and anomalies that might go unnoticed by traditional methods. Fraud detection algorithms can continuously monitor financial transactions, flagging any suspicious patterns and thereby bolstering financial security. AI-driven predictive analytics can forecast future financial trends based on historical data and market variables. This empowers organizations to make informed decisions, optimize resource allocation, and develop proactive strategies that enhance profitability and sustainability. Traditional financial reporting often involves extensive manual effort and data manipulation. With AI, reporting becomes more intelligent and intuitive. Automated report generation not only saves time but also ensures accuracy and consistency in financial statements. While the potential benefits of AI in accounting are undeniable, there are challenges to address. Data privacy and security concerns, the need for continuous learning to keep up with evolving AI technologies, and potential biases within algorithms demand careful attention. The convergence of AI and accounting marks a pivotal juncture in the evolution of financial management. By harnessing the capabilities of AI, accounting professionals can transcend routine tasks, becoming strategic advisors and data-driven decision-makers. The applications discussed in this paper underline the transformative power of AI, setting the stage for an accounting landscape that is smarter, more efficient, and more insightful than ever before. The future of accounting is here, and it's driven by artificial intelligence.
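The abstract describes fraud detection only at a high level; a minimal sketch of one common realisation, unsupervised anomaly scoring of transaction features with an isolation forest, is shown below. The feature set, the synthetic ledger, and the contamination rate are illustrative assumptions, not anything prescribed by the paper.

    # Illustrative anomaly flagging of ledger transactions with an isolation forest.
    # Features and parameter choices are assumptions for demonstration only.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Synthetic ledger rows: [amount, hour_of_day, days_since_vendor_last_used]
    normal = np.column_stack([
        rng.lognormal(mean=5.0, sigma=0.6, size=500),   # typical invoice amounts
        rng.integers(8, 18, size=500),                  # posted during business hours
        rng.integers(0, 30, size=500),                  # active vendors
    ])
    suspicious = np.array([[250000.0, 3, 400]])         # large, off-hours, dormant vendor
    ledger = np.vstack([normal, suspicious])

    model = IsolationForest(contamination=0.01, random_state=0).fit(ledger)
    flags = model.predict(ledger)                       # -1 = flagged for review, 1 = normal
    print("transactions flagged for review:", np.where(flags == -1)[0])

In practice such scores would feed a human review queue rather than an automatic decision, which is also where the concerns about algorithmic bias noted above apply.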

Keywords: artificial intelligence, accounting, automation, predictive analytics, financial reporting

Procedia PDF Downloads 44
249 Gauging Floral Resources for Pollinators Using High Resolution Drone Imagery

Authors: Nicholas Anderson, Steven Petersen, Tom Bates, Val Anderson

Abstract:

Under the multiple-use management regime established in the United States for federally owned lands, government agencies have come under pressure from commercial apiaries to grant permits for the summer pasturing of honeybees on government lands. Federal agencies have struggled to integrate honeybees into their management plans and have little information with which to make regulations that resolve how many colonies should be allowed in a single location and at what distance sets of hives should be placed. Many conservation groups have voiced their concerns regarding the introduction of honeybees to these natural lands, as they may outcompete and displace native pollinating species. Assessing the quality of an area in regard to its floral resources, pollen, and nectar can be important when attempting to create regulations for the integration of commercial honeybee operations into a native ecosystem. Areas with greater floral resources may be able to support larger numbers of honeybee colonies, while poorer resource areas may be less resilient to introduced disturbances. This study attempts to determine flower cover using high-resolution drone imagery to help assess the floral resource availability to pollinators in high-elevation, tall forb communities. This knowledge will help in determining the potential that different areas may have for honeybee pasturing and honey production. Roughly 700 images were captured at 23 m above ground level using a drone equipped with a Sony QX1 RGB 20-megapixel camera. These images were stitched together using Pix4D, resulting in a 60 m diameter high-resolution mosaic of a tall forb meadow. Using the program ENVI, a supervised maximum likelihood classification was conducted to calculate the percentage of total flower cover and flower cover by color (blue, white, and yellow). A complete vegetation inventory was taken on site, and the major flowers contributing to each color class were noted. An accuracy assessment was performed on the classification, yielding an 89% overall accuracy and a Kappa statistic of 0.855. With this level of accuracy, drones provide an affordable and time-efficient method for the assessment of floral cover over large areas. The next step of this project will be to determine the average pollen and nectar loads carried by each flower species. The addition of this knowledge will result in a quantifiable method of measuring the pollen and nectar resources of entire landscapes. This information will not only help land managers determine stocking rates for honeybees on public lands but also has applications in the agricultural setting, aiding producers in determining the number of honeybee colonies necessary for proper pollination of fruit and nut crops.
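The abstract reports an 89% overall accuracy and a Kappa statistic of 0.855 for the supervised classification; for readers unfamiliar with how these are derived, the sketch below computes both from a confusion matrix. The matrix values are invented for illustration and are not the study's data.

    # Overall accuracy and Cohen's kappa from a confusion matrix.
    # Rows = reference classes, columns = classified classes
    # (e.g., background, blue, white, yellow flower cover); values are illustrative.
    import numpy as np

    cm = np.array([
        [120,   3,   2,   1],
        [  4,  45,   1,   0],
        [  2,   1,  38,   2],
        [  1,   0,   2,  30],
    ], dtype=float)

    n = cm.sum()
    p_observed = np.trace(cm) / n                                   # overall accuracy
    p_expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2   # chance agreement
    kappa = (p_observed - p_expected) / (1.0 - p_expected)

    print(f"overall accuracy = {p_observed:.3f}")
    print(f"kappa statistic  = {kappa:.3f}")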

Keywords: honeybee, flower, pollinator, remote sensing

Procedia PDF Downloads 112
248 Relative Expression and Detection of MUB Adhesion Domains and Plantaricin-Like Bacteriocin among Probiotic Lactobacillus plantarum-Group Strains Isolated from Fermented Foods

Authors: Sundru Manjulata Devi, Prakash M. Halami

Abstract:

Fermented foods from vegetables, dairy and other biological sources have been used since time immemorial and are in great demand in India because of their health benefits. However, the diversity of the Lactobacillus plantarum group (LPG) of vegetable origin has not yet been revealed, particularly with reference to its probiotic functionalities. In the present study, different species of the probiotic Lactobacillus plantarum group (LPG), i.e., L. plantarum subsp. plantarum MTCC 5422 (from fermented cereals), L. plantarum subsp. argentoratensis FG16 (from fermented bamboo shoot) and L. paraplantarum MTCC 9483 (from fermented gundruk), as characterized by a multiplex recA PCR assay, were used to investigate the relative expression of the MUB domains of the mub gene (mucin-binding protein) by real-time PCR. Initially, the allelic variation in the mub gene was assessed and found to encode three different variants (Types I, II and III). The three types had 8, 9 and 10 MUB domains, respectively (as analysed with the Pfam database), and these domains were found to be responsible for adhesion of the bacteria to the host intestinal epithelial cells. The domains are either inserted or deleted during speciation or evolutionary events and lead to divergence. Reverse transcriptase qPCR analysis with the mubLPF1+R1 primer pair supported the variation in amplicon sizes, with 300, 500 and 700 bp among the different LPG strains. The relative expression of these MUB domains was significantly upregulated in the presence of 1% mucin in overnight-grown cultures. The mub gene was expressed most efficiently, with a 7-fold increase, in L. paraplantarum MTCC 9483, which carries 10 MUB domains. The increase in expression levels for L. plantarum subsp. plantarum MTCC 5422 and L. plantarum subsp. argentoratensis FG16 (MCC 2974), with 9 and 8 repetitive domains, was around 4- and 2-fold, respectively. The detection and expression of an integrase (int) gene in the upstream region of the mub gene reveals the excision and integration of these repetitive domains. Concurrently, an in vitro mucin adhesion assay and a pathogen exclusion assay (against Listeria monocytogenes and Micrococcus luteus) were performed, and it was observed that L. paraplantarum MTCC 9483, with more adhesion domains, had a greater ability to adhere to mucin and inhibited the growth of the pathogens. The production and expression of a plantaricin-like bacteriocin (plnNC8 type) in MTCC 9483 likely accounts for the pathogen inhibition. Hence, the expression of MUB domains can act as a potential biomarker in the screening of novel probiotic LPG strains with adherence properties. The present study provides a platform for an easy, rapid, less time-consuming, low-cost methodology for the detection of potential probiotic bacteria. The traditional practices followed in the preparation of fermented bamboo shoot, gundruk and cereal foods in India are known to yield different kinds of nutraceuticals for functional foods and novel compounds with health-promoting factors. In the future, a detailed study of these food products could add nutritive value, increase consumption and support commercialization.
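The abstract does not state how the fold changes were calculated; a common choice for RT-qPCR relative expression is the 2^(-ΔΔCt) (Livak) method, sketched below. The Ct values and the use of a single reference gene are hypothetical, purely to show the arithmetic.

    # Relative expression by the 2^(-delta-delta-Ct) (Livak) method.
    # Ct values and the reference gene are hypothetical, for illustration only.
    def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
        d_ct_treated = ct_target_treated - ct_ref_treated   # normalise to reference gene
        d_ct_control = ct_target_control - ct_ref_control
        dd_ct = d_ct_treated - d_ct_control
        return 2.0 ** (-dd_ct)

    # Example: mub expression with vs. without 1% mucin (numbers are made up).
    print(round(fold_change(21.2, 18.0, 24.0, 18.0), 1))   # ~7-fold upregulation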

Keywords: adhesion gene, fermented foods, MUB domains, probiotics

Procedia PDF Downloads 248
247 Bridging the Educational Gap: A Curriculum Framework for Mass Timber Construction Education and Comparative Analysis of Physical vs. Virtual Prototypes in Construction Management

Authors: Farnaz Jafari

Abstract:

The surge in mass timber construction represents a pivotal moment in sustainable building practices, yet the lack of comprehensive education in construction management poses a challenge in harnessing this innovation effectively. This research endeavors to bridge this gap by developing a curriculum framework integrating mass timber construction into undergraduate and industry certificate programs. To optimize learning outcomes, the study explores the impact of two prototype formats, Virtual Reality (VR) simulations and physical mock-ups, on students' understanding and skill development. The curriculum framework aims to equip future construction managers with a holistic understanding of mass timber, covering its unique properties, construction methods, building codes, and sustainable advantages. The study adopts a mixed-methods approach, commencing with a systematic literature review and leveraging surveys and interviews with educators and industry professionals to identify existing educational gaps. The iterative development process involves incorporating stakeholder feedback into the curriculum. The evaluation of prototype impact employs pre- and post-tests administered to participants engaged in pilot programs. Through qualitative content analysis and quantitative statistical methods, the study seeks to compare the effectiveness of VR simulations and physical mock-ups in conveying knowledge and skills related to mass timber construction. The anticipated findings will illuminate the strengths and weaknesses of each approach, providing insights for future curriculum development. The curriculum's expected contribution to sustainable construction education lies in its emphasis on practical application, bridging the gap between theoretical knowledge and hands-on skills. The research also seeks to establish a standard for mass timber construction education, contributing to the field through a unique comparative analysis of VR simulations and physical mock-ups. The study's significance extends to the development of best practices and evidence-based recommendations for integrating technology and hands-on experiences in construction education. By addressing current educational gaps and offering a comparative analysis, this research aims to enrich the construction management education experience and pave the way for broader adoption of sustainable practices in the industry. The envisioned curriculum framework is designed for versatile integration, catering to undergraduate programs and industry training modules, thereby enhancing the educational landscape for aspiring construction professionals. Ultimately, this study underscores the importance of proactive educational strategies in preparing industry professionals for the evolving demands of the construction landscape, facilitating a seamless transition towards sustainable building practices.
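The quantitative methods are not specified in the abstract; one plausible comparison of the two prototype formats, assuming pre/post knowledge scores per participant, is an independent-samples test on the gain scores, sketched below with invented placeholder scores.

    # One plausible pre/post comparison: Welch's t-test on gain scores for the
    # VR group vs. the physical mock-up group. Scores are invented placeholders.
    import numpy as np
    from scipy import stats

    pre_vr    = np.array([52, 48, 61, 55, 49, 58, 60, 47])
    post_vr   = np.array([74, 70, 82, 78, 69, 80, 83, 71])
    pre_mock  = np.array([50, 53, 59, 46, 57, 51, 62, 49])
    post_mock = np.array([68, 71, 75, 63, 76, 66, 79, 64])

    gain_vr = post_vr - pre_vr
    gain_mock = post_mock - pre_mock

    t_stat, p_value = stats.ttest_ind(gain_vr, gain_mock, equal_var=False)
    print(f"mean gain: VR = {gain_vr.mean():.1f}, mock-up = {gain_mock.mean():.1f}")
    print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")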

Keywords: curriculum framework, mass timber construction, physical vs. virtual prototypes, sustainable building practices

Procedia PDF Downloads 46
246 Barriers to Social Entrepreneurship by Refugees: An Explorative Study How Prior Experience Influences Social Orientation

Authors: D. M. Koers, A. J. Groen, P. D. Englis, R. Harms

Abstract:

We are witnessing the largest displacement of people since World War II. Refugees want to become independent as quickly as possible and build a new, safe future; however, access to the labor market is difficult, and they face many problems that are not easily solved. This makes self-employment, including social entrepreneurship, a valuable alternative. Our research studied refugee-based entrepreneurship and examined whether prior knowledge, unmet personal needs and contextual factors influence how refugees recognize opportunities and whether this influences their social orientation. In addition, we examine the barriers refugees face when starting up a company in the Netherlands. We use a case study design with a mixed-method approach, combining in-depth interviews and survey data. Data were collected from two Dutch entrepreneurial training programs in the Netherlands, giving a sample of 27 latent refugee entrepreneurs. Our results show that refugees score high on the social entrepreneurial measures. They perceive themselves as having a strong social vision and are determined to defend a social need. They also score high on sustainability and state that their business ideas improve quality of life in the long run. Given these findings, it was unexpected that only 5 participants had business ideas with a social orientation. In this group, 37.5% had started a company before, and 77.8% used their personal experience to come up with the business idea. Another 70.3% had higher professional or academic education. In the interviews, we found that refugees often copy and paste the experience gained in a previous profession onto their new context and expect that it will work well. The social aspect lies in their cultural values and personal beliefs but is not reflected in their business models. One of the reasons could be that the context in which the refugee operates acts as a moderator, suppressing the social mission and social value creation opportunities. Refugees are first and foremost focused on their survival. They do not want to be on social welfare and feel a strong need to be independent. Since they cannot access the labor market easily and face labor market discrimination, they want to start a company. Another factor that explains the lack of social orientation in their business ideas is that social entrepreneurship is not a known concept in their home countries; their idea of entrepreneurship differs substantially. We found that a huge barrier for refugees is their expectations about setting up a business, which are often not realistic because they have little knowledge about the system, institutions and corresponding red tape. In those instances, the institutional configuration of a country, cultural differences, and perspectives on entrepreneurship can hinder social entrepreneurship. In conclusion, there might be latent potential for social entrepreneurship among refugees, but there are many barriers to overcome. Overcoming these barriers can strengthen local communities and enhance integration. In addition, it has a positive financial impact on the host country because it reduces pressure on the social system and stimulates the economy.

Keywords: immigrant entrepreneurship, refugee entrepreneurship, social entrepreneurship, prior experience, opportunity recognition

Procedia PDF Downloads 146
245 Leveraging Multimodal Neuroimaging Techniques to in vivo Address Compensatory and Disintegration Patterns in Neurodegenerative Disorders: Evidence from Cortico-Cerebellar Connections in Multiple Sclerosis

Authors: Efstratios Karavasilis, Foteini Christidi, Georgios Velonakis, Agapi Plousi, Kalliopi Platoni, Nikolaos Kelekis, Ioannis Evdokimidis, Efstathios Efstathopoulos

Abstract:

Introduction: Advanced structural and functional neuroimaging techniques contribute to the study of anatomical and functional brain connectivity and its role in the pathophysiology and symptom heterogeneity of several neurodegenerative disorders, including multiple sclerosis (MS). Aim: In the present study, we applied multiparametric neuroimaging techniques to investigate the structural and functional cortico-cerebellar changes in MS patients. Material: We included 51 MS patients (28 with clinically isolated syndrome [CIS], 31 with relapsing-remitting MS [RRMS]) and 51 age- and gender-matched healthy controls (HC) who underwent MRI on a 3.0T scanner. Methodology: The acquisition protocol included high-resolution 3D T1-weighted, diffusion-weighted imaging and echo planar imaging sequences for the analysis of volumetric, tractography and functional resting-state data, respectively. We performed between-group comparisons (CIS, RRMS, HC) using the CAT12 and CONN16 MATLAB toolboxes for the analysis of volumetric (cerebellar gray matter density) and functional (cortico-cerebellar resting-state functional connectivity) data, respectively. The Brainance suite was used for the analysis of tractography data (cortico-cerebellar white matter integrity; fractional anisotropy [FA]; axial and radial diffusivity [AD; RD]) to reconstruct the cerebellar tracts. Results: Patients with CIS did not show significant gray matter (GM) density differences compared with HC. However, they showed decreased FA and increased diffusivity measures in cortico-cerebellar tracts, and increased cortico-cerebellar functional connectivity. Patients with RRMS showed decreased GM density in cerebellar regions, decreased FA and increased diffusivity measures in cortico-cerebellar WM tracts, as well as a pattern of increased and mostly decreased functional cortico-cerebellar connectivity compared to HC. The comparison between CIS and RRMS patients revealed significant GM density differences, reduced FA and increased diffusivity measures in WM cortico-cerebellar tracts, and increased/decreased functional connectivity. The identification of decreased WM integrity and increased functional cortico-cerebellar connectivity without GM changes in CIS, and the pattern of decreased GM density, decreased WM integrity and mostly decreased functional connectivity in RRMS patients, emphasizes the role of compensatory mechanisms in early disease stages and the disintegration of structural and functional networks with disease progression. Conclusions: Our study highlights the added value of multimodal neuroimaging techniques for the in vivo investigation of cortico-cerebellar brain changes in neurodegenerative disorders. A future opportunity to leverage multimodal neuroimaging data remains the integration of such data into recently applied machine learning approaches to more accurately classify patients and predict their disease course.
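The diffusion metrics reported here (FA, AD, RD) are standard functions of the diffusion tensor eigenvalues; the sketch below restates the usual definitions for reference, independently of the Brainance suite actually used in the study.

    # Standard diffusion-tensor metrics from the eigenvalues l1 >= l2 >= l3.
    # Example eigenvalues (in mm^2/s) are illustrative, not study data.
    import numpy as np

    def dti_metrics(l1, l2, l3):
        lam = np.array([l1, l2, l3], dtype=float)
        md = lam.mean()                     # mean diffusivity
        ad = l1                             # axial diffusivity
        rd = (l2 + l3) / 2.0                # radial diffusivity
        fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
        return fa, md, ad, rd

    fa, md, ad, rd = dti_metrics(1.7e-3, 0.4e-3, 0.3e-3)
    print(f"FA = {fa:.2f}, MD = {md:.2e}, AD = {ad:.2e}, RD = {rd:.2e}")

Decreased FA accompanied by increased diffusivities, as reported for the cortico-cerebellar tracts, therefore reflects a less anisotropic, more freely diffusing local environment.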

Keywords: advanced neuroimaging techniques, cerebellum, MRI, multiple sclerosis

Procedia PDF Downloads 120
244 Covariate-Adjusted Response-Adaptive Designs for Semi-Parametric Survival Responses

Authors: Ayon Mukherjee

Abstract:

Covariate-adjusted response-adaptive (CARA) designs use the available responses to skew the treatment allocation in a clinical trial towards the treatment found at an interim stage to be best for a given patient's covariate profile. Extensive research has been done on various aspects of CARA designs with the patient responses assumed to follow a parametric model. However, the range of application for such designs is limited in real-life clinical trials, where the responses infrequently fit a particular parametric form. On the other hand, robust estimates of the covariate-adjusted treatment effects are obtained under the parametric assumption. To balance these two requirements, designs are developed which are free from distributional assumptions about the survival responses, relying only on the assumption of proportional hazards for the two treatment arms. The proposed designs are developed by deriving two types of optimum allocation designs, and also by using a distribution function to link the past allocation, covariate and response histories to the present allocation. The optimal designs are based on biased coin procedures, with a bias towards the better treatment arm. These are the doubly-adaptive biased coin design (DBCD) and the efficient randomized adaptive design (ERADE). The treatment allocation proportions for these designs converge to the expected target values, which are functions of the Cox regression coefficients that are estimated sequentially. These expected target values are derived from constrained optimization problems and are updated as information accrues with the sequential arrival of patients. The design based on the link function is derived using the distribution function of a probit model whose parameters are adjusted based on the covariate profile of the incoming patient. To apply such designs, the treatment allocation probabilities are sequentially modified based on the treatment allocation history, the response history, previous patients' covariates and also the covariates of the incoming patient. Given this information, an expression is obtained for the conditional probability of allocating a patient to a treatment arm. Based on simulation studies, it is found that the ERADE is preferable to the DBCD when the main aim is to minimize the variance of the observed allocation proportion and to maximize the power of the Wald test for a treatment difference. However, the former procedure, being discrete, tends to be slower in converging towards the expected target allocation proportion. The design based on the link function achieves the highest skewness of patient allocation to the best treatment arm and is thus ethically the best design. Other comparative merits of the proposed designs have been highlighted, and their preferred areas of application are discussed. It is concluded that the proposed CARA designs can be considered suitable alternatives to traditional balanced randomization designs in survival trials in terms of the power of the Wald test, provided that response data are available during the recruitment phase of the trial to enable adaptations to the designs. Moreover, the proposed designs enable more patients to be treated with the better treatment during the trial, thus making the designs more ethically attractive to patients. An existing clinical trial has been redesigned using these methods.
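The allocation rules themselves are not reproduced in the abstract; one widely used form of the doubly-adaptive biased coin allocation function (the Hu–Zhang form) is sketched below to show how the next assignment is skewed back towards a target proportion. In the proposed designs the target would be a function of the sequentially estimated Cox regression coefficients; here the target and the tuning parameter gamma are arbitrary illustrative values.

    # Sketch of a doubly-adaptive biased coin (DBCD) allocation step, Hu-Zhang form.
    # x     : proportion of patients allocated to treatment A so far
    # rho   : current estimated target allocation proportion for A
    # gamma : tuning parameter controlling the degree of skewing
    def dbcd_probability(x, rho, gamma=2.0):
        if x <= 0.0 or x >= 1.0:      # guard for the very start of the trial
            return rho
        num = rho * (rho / x) ** gamma
        den = num + (1.0 - rho) * ((1.0 - rho) / (1.0 - x)) ** gamma
        return num / den

    # Target 0.65 on A but only 0.50 allocated so far: the next patient is
    # assigned to A with probability above the target, pulling allocation back.
    print(round(dbcd_probability(x=0.50, rho=0.65), 3))   # ~0.86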

Keywords: censored response, Cox regression, efficiency, ethics, optimal allocation, power, variability

Procedia PDF Downloads 146
243 Neonatology Clinical Routine in Cats and Dogs: Cases, Main Conditions and Mortality

Authors: Maria L. G. Lourenço, Keylla H. N. P. Pereira, Viviane Y. Hibaru, Fabiana F. Souza, João C. P. Ferreira, Simone B. Chiacchio, Luiz H. A. Machado

Abstract:

The neonatal care of cats and dogs represents a challenge to veterinarians due to the small size of the newborns and their physiological particularities. In addition, many veterinary colleges around the world do not include neonatology in the curriculum, which makes it less likely for veterinarians to have basic knowledge regarding neonatal care and worsens the clinical care these patients receive. Therefore, lack of assistance and negligence have become frequent in the field, which contributes to the high mortality rates. This study aims to describe the cases and the main conditions pertaining to the neonatology clinical routine in cats and dogs, highlighting the importance of specialized care in this field of veterinary medicine. The study included 808 neonates admitted to the São Paulo State University (UNESP) Veterinary Hospital, Botucatu, São Paulo, Brazil, between January 2018 and November 2019. Of these, 87.3% (705/808) were dogs and 12.7% (103/808) were cats. Among the neonates admitted, 57.3% (463/808) came from emergency c-sections due to dystocia, 8.7% (71/808) came from vaginal deliveries with obstetric maneuvers due to dystocia, and 34% (274/808) were admitted for clinical care due to neonatal conditions. Among the neonates that came from emergency c-sections and vaginal deliveries, 47.3% (253/534) were born in respiratory distress due to severe hypoxia or persistent apnea and required resuscitation procedures, such as stimulation of the Jen Chung acupuncture point (VG26), oxygen therapy with a mask, pulmonary expansion with a resuscitator, cardiac massage and administration of emergency medication, such as epinephrine. In the neonatal clinical care, the main conditions and alterations observed in the newborns were omphalophlebitis, toxic milk syndrome, neonatal conjunctivitis, swimmer puppy syndrome, neonatal hemorrhagic syndrome, pneumonia, trauma, low birth weight, prematurity, congenital malformations (cleft palate, cleft lip, hydrocephaly, anasarca, vascular anomalies of the heart, anal atresia, gastroschisis, omphalocele, among others), neonatal sepsis and other local and systemic bacterial infections, viral infections (feline respiratory complex, parvovirus, canine distemper, canine infectious tracheobronchitis), parasitic infections (Toxocara spp., Ancylostoma spp., Strongyloides spp., Cystoisospora spp., Babesia spp. and Giardia spp.) and fungal infections (dermatophytosis caused by Microsporum canis). The most common clinical presentation observed was the neonatal triad (hypothermia, hypoglycemia and dehydration), affecting 74.6% (603/808) of the patients. The mortality rate among the neonates was 10.5% (85/808). Knowledge of neonatology is essential for veterinarians to provide adequate care for these patients in the clinical routine. Adding neonatology to college curricula, improving the dissemination of information on the subject, and providing annual training in neonatology for veterinarians and staff are important to improve immediate care and reduce mortality rates.

Keywords: neonatal care, puppies, neonatal conditions

Procedia PDF Downloads 207
242 Lignin Valorization: Techno-Economic Analysis of Three Lignin Conversion Routes

Authors: Iris Vural Gursel, Andrea Ramirez

Abstract:

Effective utilization of lignin is an important means of developing economically profitable biorefineries. The current literature suggests that large amounts of lignin will become available in second-generation biorefineries. New conversion technologies will, therefore, be needed to carry lignin transformation well beyond combustion for energy and towards high-value products such as chemicals and transportation fuels. In recent years, significant progress in catalysis has been made to improve the transformation of lignin, and new catalytic processes are emerging. In this work, a techno-economic assessment of two of these novel conversion routes was made, together with a comparison against the more established lignin pyrolysis route. The aim is to provide insights into the potential performance and potential hotspots in order to guide experimental research and ease commercialization by identifying cost drivers, strengths, and challenges early. The lignin conversion routes selected for detailed assessment were (non-catalytic) lignin pyrolysis as the benchmark, direct hydrodeoxygenation (HDO) of lignin, and hydrothermal lignin depolymerisation. The products generated were mixed oxygenated aromatic monomers (MOAMON), light organics, heavy organics, and char. For the technical assessment, a basis design followed by process modelling in Aspen was carried out using experimental yields. A design capacity of 200 kt/year lignin feed was chosen, which is equivalent to a 1 Mt/y scale lignocellulosic biorefinery. The downstream equipment was modelled to achieve the separation of the product streams defined. For determining the external utility requirement, heat integration was considered, and when possible, gases were combusted to cover the heating demand. The models were used to generate the necessary data on material and energy flows. Next, an economic assessment was carried out by estimating operating and capital costs. Return on investment (ROI) and payback period (PBP) were used as indicators. The results of the process modelling indicate that a series of separation steps is required. The downstream processing was found to be especially demanding in the hydrothermal upgrading process due to the presence of a significant amount of unconverted lignin (34%) and water. External utility requirements were also found to be high. Due to the complex separations, the hydrothermal upgrading process showed the highest capital cost (50 M€ more than the benchmark), whereas operating costs were found to be highest for the direct HDO process (20 M€/year more than the benchmark) due to the use of hydrogen. Because of high yields of valuable heavy organics (32%) and MOAMON (24%), the direct HDO process showed the highest ROI (12%) and the shortest PBP (5 years). This process is found to be feasible, with a positive net present value; however, it is very sensitive to the prices used in the calculation. The assessments at this stage are associated with large uncertainties. Nevertheless, they are useful for comparing alternatives and identifying whether a certain process should be given further consideration. Among the three processes investigated here, the direct HDO process was found to be the most promising.
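The economic indicators used (ROI and payback period) follow standard definitions; the sketch below computes both from capital and annual cost/revenue estimates. All euro figures are invented placeholders, not values from this assessment.

    # Return on investment and simple (undiscounted) payback period for a route.
    # All monetary values are placeholders, not results from the study.
    def roi_and_payback(capital_cost, annual_revenue, annual_operating_cost):
        annual_profit = annual_revenue - annual_operating_cost
        roi = annual_profit / capital_cost            # fraction of capital recovered per year
        payback_years = capital_cost / annual_profit  # simple payback, no discounting
        return roi, payback_years

    roi, pbp = roi_and_payback(capital_cost=120e6,          # EUR, assumed
                               annual_revenue=80e6,
                               annual_operating_cost=62e6)
    print(f"ROI = {roi:.1%}, payback = {pbp:.1f} years")

A discounted measure such as net present value, also reported in the abstract, would additionally weight each year's cash flow by a discount factor.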

Keywords: biorefinery, economic assessment, lignin conversion, process design

Procedia PDF Downloads 244
241 The Role of Uterine Artery Embolization in the Management of Postpartum Hemorrhage

Authors: Chee Wai Ku, Pui See Chin

Abstract:

As an emerging alternative to hysterectomy, uterine artery embolization (UAE) has been widely used in the management of fibroids and in controlling postpartum hemorrhage (PPH) unresponsive to other therapies. Research has shown UAE to be a safe, minimally invasive procedure with few complications and minimal effects on future fertility. We present two cases highlighting the use of UAE in preventing PPH in a patient with a large fibroid at the time of cesarean section and in the treatment of secondary PPH refractory to other therapies in another patient. The first case is a 36-year-old primiparous woman who booked at 18+6 weeks gestation with a 13.7 cm subserosal fibroid at the lower anterior wall of the uterus near the cervix and a 10.8 cm subserosal fibroid in the left wall. Prophylactic internal iliac artery occlusion balloons were placed prior to the planned classical midline cesarean section. The balloons were inflated once the baby was delivered, and both uterine arteries were embolized subsequently. The estimated blood loss (EBL) was 400 ml, and hemoglobin (Hb) remained stable at 10 g/dL. An ultrasound scan 2 years postnatally showed stable uterine fibroids of 10.4 and 7.1 cm, significantly smaller than before. The second case is a 40-year-old G2P1 with a previous cesarean section for failure to progress. There were no antenatal problems, and there was no placenta previa. She presented in labour at term and underwent an emergency cesarean section for failed vaginal birth after cesarean. Intraoperatively, extensive adhesions were noted with the bladder drawn high, and EBL was 300 ml. Postpartum recovery was uneventful. She presented with secondary PPH 3 weeks later, complicated by hypovolemic shock. She underwent an emergency examination under anesthesia and evacuation of the uterus, with an EBL of 2500 ml. Histology showed decidua with chronic inflammation. She was discharged well with no further PPH but returned one week later with secondary PPH again. Bedside ultrasound showed that the endometrium was thin, with no evidence of retained products of conception. Uterotonics were administered, and an examination under anesthesia was performed, followed by insertion of a uterine Bakri balloon and vaginal pack. EBL was 1000 ml. There was no definite cause of the PPH, with no uterine atony or retained products of conception. To evaluate a potential cause, a pelvic angiogram and a superselective left uterine arteriogram were performed, which showed profuse contrast extravasation and acute bleeding from the left uterine artery. Superselective embolization of the left uterine artery was performed; no gross contrast extravasation from the right uterine artery was seen. These two cases demonstrate the efficacy of UAE, first in the prophylactic use of intra-arterial balloon catheters in pregnant patients with large fibroids, and second in the diagnosis and management of secondary PPH refractory to uterotonics and uterine tamponade. In both cases, the need for laparotomy and hysterectomy was avoided, resulting in preservation of future fertility. UAE should be a consideration for hemodynamically stable patients in centres with access to interventional radiology.

Keywords: fertility preservation, secondary postpartum hemorrhage, uterine embolization, uterine fibroids

Procedia PDF Downloads 168
240 Categorical Metadata Encoding Schemes for Arteriovenous Fistula Blood Flow Sound Classification: Scaling Numerical Representations Leads to Improved Performance

Authors: George Zhou, Yunchan Chen, Candace Chien

Abstract:

Kidney replacement therapy is the current standard of care for end-stage renal disease. In-center or home hemodialysis remains an integral component of the therapeutic regimen. Arteriovenous fistulas (AVF) make up the vascular circuit through which blood is filtered and returned. Naturally, AVF patency determines whether adequate clearance and filtration can be achieved and directly influences clinical outcomes. Our aim was to build a deep learning model for automated AVF stenosis screening based on the sound of blood flow through the AVF. A total of 311 patients with AVF were enrolled in this study. Blood flow sounds were collected using a digital stethoscope. For each patient, blood flow sounds were collected at 6 different locations along the patient's AVF: artery, anastomosis, distal vein, middle vein, proximal vein, and venous arch. A total of 1866 sounds were collected. The blood flow sounds are labeled as "patent" (normal) or "stenotic" (abnormal), with labels validated by concurrent ultrasound. Our dataset included 1527 "patent" and 339 "stenotic" sounds. We show that blood flow sounds vary significantly along the AVF; for example, the blood flow sound is loudest at the anastomosis site and softest at the cephalic arch. Contextualizing the sound with location metadata significantly improves classification performance. How to encode and incorporate categorical metadata is an active area of research. Herein, we study ordinal (i.e., integer) encoding schemes, in which the numerical representation is concatenated to the flattened feature vector. We train a vision transformer (ViT) on spectrogram image representations of the sound and demonstrate that using scalar multiples of our integer encodings improves classification performance. Models are evaluated using a 10-fold cross-validation procedure. The baseline performance of our ViT without any location metadata achieves an AuROC and AuPRC of 0.68 ± 0.05 and 0.28 ± 0.09, respectively. Using the encodings Artery: 0; Arch: 1; Proximal: 2; Middle: 3; Distal: 4; Anastomosis: 5, the ViT achieves an AuROC and AuPRC of 0.69 ± 0.06 and 0.30 ± 0.10, respectively. Using the encodings Artery: 0; Arch: 10; Proximal: 20; Middle: 30; Distal: 40; Anastomosis: 50, the ViT achieves an AuROC and AuPRC of 0.74 ± 0.06 and 0.38 ± 0.10, respectively. Using the encodings Artery: 0; Arch: 100; Proximal: 200; Middle: 300; Distal: 400; Anastomosis: 500, the ViT achieves an AuROC and AuPRC of 0.78 ± 0.06 and 0.43 ± 0.11, respectively. Interestingly, using increasing scalar multiples of our integer encoding scheme (i.e., encoding "venous arch" as 1, 10, or 100) results in progressively improved performance. In theory, the integer values should not matter, since we are optimizing the same loss function; the model can learn to increase or decrease the weights associated with the location encodings and converge on the same solution. However, in the setting of limited data and computational resources, increasing the importance of the location encoding at initialization either leads to faster convergence or helps the model escape a local minimum.
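The concatenation step described above is simple to state in code; a minimal sketch is shown below, with the feature dimensionality and scale factor arbitrary (in the study the features come from a ViT trained on spectrograms).

    # Appending a scaled ordinal location encoding to a flattened feature vector.
    # Feature dimensionality and the scale factor are arbitrary illustrations.
    import numpy as np

    LOCATION_CODE = {"artery": 0, "arch": 1, "proximal": 2,
                     "middle": 3, "distal": 4, "anastomosis": 5}

    def add_location_encoding(features, location, scale=100):
        """Concatenate scale * ordinal code of the recording site to the features."""
        code = np.array([LOCATION_CODE[location] * scale], dtype=features.dtype)
        return np.concatenate([features, code])

    features = np.random.randn(768).astype(np.float32)   # stand-in for a flattened embedding
    augmented = add_location_encoding(features, "anastomosis", scale=100)
    print(augmented.shape, augmented[-1])                # (769,) 500.0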

Keywords: arteriovenous fistula, blood flow sounds, metadata encoding, deep learning

Procedia PDF Downloads 62