Search results for: image skeletonizing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2769


69 Categorical Metadata Encoding Schemes for Arteriovenous Fistula Blood Flow Sound Classification: Scaling Numerical Representations Leads to Improved Performance

Authors: George Zhou, Yunchan Chen, Candace Chien

Abstract:

Kidney replacement therapy is the current standard of care for end-stage renal disease. In-center or home hemodialysis remains an integral component of the therapeutic regimen. Arteriovenous fistulas (AVF) make up the vascular circuit through which blood is filtered and returned. Naturally, AVF patency determines whether adequate clearance and filtration can be achieved and directly influences clinical outcomes. Our aim was to build a deep learning model for automated AVF stenosis screening based on the sound of blood flow through the AVF. A total of 311 patients with AVF were enrolled in this study. Blood flow sounds were collected using a digital stethoscope. For each patient, blood flow sounds were collected at 6 different locations along the patient’s AVF: artery, anastomosis, distal vein, middle vein, proximal vein, and venous arch. A total of 1866 sounds were collected. The blood flow sounds are labeled as “patent” (normal) or “stenotic” (abnormal), with labels validated against concurrent ultrasound. Our dataset included 1527 “patent” and 339 “stenotic” sounds. We show that blood flow sounds vary significantly along the AVF; for example, the blood flow sound is loudest at the anastomosis site and softest at the venous arch. Contextualizing the sound with location metadata significantly improves classification performance. How to encode and incorporate categorical metadata is an active area of research. Herein, we study ordinal (i.e., integer) encoding schemes, in which the numerical representation is concatenated to the flattened feature vector. We train a vision transformer (ViT) on spectrogram image representations of the sound and demonstrate that using scalar multiples of our integer encodings improves classification performance. Models are evaluated using a 10-fold cross-validation procedure. The baseline performance of our ViT without any location metadata achieves an AuROC and AuPRC of 0.68 ± 0.05 and 0.28 ± 0.09, respectively. 
Using the encodings Artery: 0; Arch: 1; Proximal: 2; Middle: 3; Distal: 4; Anastomosis: 5, the ViT achieves an AuROC and AuPRC of 0.69 ± 0.06 and 0.30 ± 0.10, respectively. Using the encodings Artery: 0; Arch: 10; Proximal: 20; Middle: 30; Distal: 40; Anastomosis: 50, the ViT achieves an AuROC and AuPRC of 0.74 ± 0.06 and 0.38 ± 0.10, respectively. Using the encodings Artery: 0; Arch: 100; Proximal: 200; Middle: 300; Distal: 400; Anastomosis: 500, the ViT achieves an AuROC and AuPRC of 0.78 ± 0.06 and 0.43 ± 0.11, respectively. Interestingly, using increasing scalar multiples of our integer encoding scheme (i.e., encoding “venous arch” as 1, 10, 100) results in progressively improved performance. In theory, the integer values should not matter, since we are optimizing the same loss function; the model can learn to increase or decrease the weights associated with the location encodings and converge on the same solution. However, in the setting of limited data and computational resources, increasing the importance at initialization either leads to faster convergence or helps the model escape a local minimum.
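The encoding step described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' exact implementation: the location-to-integer mapping follows the abstract, while the function name and the 768-dimensional feature size are hypothetical.

```python
import numpy as np

# Ordinal location codes from the abstract; the scale factor (1, 10, 100)
# is the multiplier the study varies.
LOCATION_CODES = {"artery": 0, "arch": 1, "proximal": 2,
                  "middle": 3, "distal": 4, "anastomosis": 5}

def append_location_encoding(features: np.ndarray, location: str,
                             scale: int = 100) -> np.ndarray:
    """Concatenate a scaled ordinal location code to a flattened feature vector."""
    code = float(LOCATION_CODES[location] * scale)
    return np.concatenate([features.ravel(), [code]])

# Example: a dummy 768-dim ViT feature vector recorded at the venous arch,
# using the best-performing scale of 100.
feats = np.zeros(768)
augmented = append_location_encoding(feats, "arch", scale=100)
```

The augmented vector would then feed the classification head; at scale 100 the appended entry dominates the (here zeroed) features at initialization, which is the effect the abstract attributes to faster convergence.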

Keywords: arteriovenous fistula, blood flow sounds, metadata encoding, deep learning

Procedia PDF Downloads 88
68 Empirical Study of Innovative Development of Shenzhen Creative Industries Based on Triple Helix Theory

Authors: Yi Wang, Greg Hearn, Terry Flew

Abstract:

In order to understand how cultural innovation occurs, this paper explores the interaction in Shenzhen, China, between universities, creative industries, and government in the creative economy, using the Triple Helix framework. During the past two decades, the Triple Helix has been recognized as a new theory of innovation to inform and guide policy-making in national and regional development. Universities and governments around the world, especially in developing countries, have taken actions to strengthen connections with creative industries to develop regional economies. To date, research based on the Triple Helix model has focused primarily on science and technology collaborations, largely ignoring other fields. Hence, there is an opportunity to better understand how the Triple Helix framework might apply in the field of creative industries and what knowledge might be gleaned from such an undertaking. Since the late 1990s, the concept of ‘creative industries’ has been introduced into policy and academic discourse. The development of creative industries policy by city agencies has improved city wealth creation and economic capital. It claims to generate a ‘new economy’ of enterprise dynamics and activities for urban renewal through the arts and digital media, via knowledge transfer in knowledge-based economies. Creative industries also channel commercial inputs into the creative economy, dynamically reshaping the city into an innovative culture. In particular, this paper concentrates on creative spaces (incubators, digital tech parks, maker spaces, art hubs) where academia, industry, and government interact. China has sought to enhance the brand of its manufacturing industry through cultural policy, aiming to shift the image of ‘Made in China’ to ‘Created in China’ and to give Chinese brands more international competitiveness in a global economy. 
Shenzhen is a notable example in China of an international knowledge-based city following this path. In 2009, the Shenzhen Municipal Government proposed the city slogan ‘Build a Leading Cultural City’ to signal the government’s strong will to develop Shenzhen’s cultural capacity and creativity. The vision of Shenzhen is to become a cultural innovation center, a regional cultural center, and an international cultural city. However, there has been a lack of attention to triple helix interactions in the creative industries in China. In particular, there is limited knowledge about how co-location and interactions in creative spaces within triple helix networks influence city-based innovation; that is, the roles of the participating institutions need to be better understood. Thus, this paper discusses the interplay between university, creative industries, and government in Shenzhen. Secondary analysis and documentary analysis are used as methods in an effort to practically ground and illustrate this theoretical framework. Furthermore, this paper explores how creative spaces are being used to implement the Triple Helix in the creative industries, in particular the new combinations of resources generated from the consolidation of, and interactions between, these institutions. This study thus provides an innovative lens for understanding the components, relationships, and functions that exist within creative spaces by applying the Triple Helix framework to the creative industries.

Keywords: cultural policy, creative industries, creative city, triple Helix

Procedia PDF Downloads 206
67 The Employment of Unmanned Aircraft Systems for Identification and Classification of Helicopter Landing Zones and Airdrop Zones in Calamity Situations

Authors: Marielcio Lacerda, Angelo Paulino, Elcio Shiguemori, Alvaro Damiao, Lamartine Guimaraes, Camila Anjos

Abstract:

Accurate information about the terrain is extremely important in disaster management or conflict situations. This paper proposes the use of Unmanned Aircraft Systems (UAS) for the identification of Airdrop Zones (AZs) and Helicopter Landing Zones (HLZs). In this paper, we consider AZs to be zones where troops or supplies are dropped by parachute, and HLZs to be areas where victims can be rescued. The use of digital image processing enables the automatic generation of an orthorectified mosaic and a Digital Surface Model (DSM). This methodology allows this information, fundamental to post-disaster comprehension of the terrain, to be obtained in a short amount of time and with good accuracy. For the identification and classification of AZs and HLZs, images from a DJI Phantom 4 drone were used. The images were obtained with the knowledge and authorization of the responsible sectors and were duly registered with the control agencies. The flight was performed on May 24, 2017, and approximately 1,300 images were obtained during roughly 1 hour of flight. Afterward, new attributes were generated by Feature Extraction (FE) from the original images. The use of multispectral images and complementary attributes generated independently from them increases the accuracy of classification. The attributes used in this work include the Declivity Map and Principal Component Analysis (PCA). For the classification, four distinct classes were considered: HLZ 1 – small size (18 m x 18 m); HLZ 2 – medium size (23 m x 23 m); HLZ 3 – large size (28 m x 28 m); AZ (100 m x 100 m). The decision tree method Random Forest (RF) was used in this work. RF is a classification method that uses a large collection of de-correlated decision trees, each trained on a different random set of samples. The classification result from each tree for each object is called a class vote, and the final classification is decided by a majority of class votes. 
In this case, we used 200 trees for the execution of RF in the software WEKA 3.8. The classification result was visualized in QGIS Desktop 2.12.3. Through the methodology used, it was possible to classify in the study area: 6 areas as HLZ 1, 6 areas as HLZ 2, 4 areas as HLZ 3, and 2 areas as AZ. It should be noted that an area classified as AZ subsumes the other classes, and may also be used as a large (HLZ 3), medium (HLZ 2), or small (HLZ 1) helicopter landing zone. Likewise, an area classified as an HLZ for large rotary-wing aircraft (HLZ 3) covers the smaller area classifications, and so on. It was concluded that images obtained through small UAVs are of great use in calamity situations, since they provide high-accuracy data at low cost and low risk, with ease and agility in obtaining aerial photographs. This allows the generation, in a short time, of information about the features of the terrain to serve as an important decision support tool.
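The class-vote mechanism described above can be sketched as follows. This is an illustrative NumPy implementation of majority voting over per-tree predictions, not the internals of the WEKA run; the array shapes and the random votes are hypothetical.

```python
import numpy as np

def majority_vote(tree_predictions: np.ndarray) -> np.ndarray:
    """Combine per-tree class votes (shape: trees x objects) into a final
    class per object by counting votes and taking the argmax."""
    n_classes = int(tree_predictions.max()) + 1
    # One bincount per column (object) -> vote tally of shape (classes, objects).
    votes = np.apply_along_axis(np.bincount, 0, tree_predictions,
                                minlength=n_classes)
    return votes.argmax(axis=0)

# Example: 200 trees voting on 6 candidate areas across 4 classes
# (0: HLZ 1, 1: HLZ 2, 2: HLZ 3, 3: AZ).
rng = np.random.default_rng(0)
preds = rng.integers(0, 4, size=(200, 6))
final = majority_vote(preds)
```

Each column of `preds` holds one object's 200 class votes; the final label is simply the most frequent vote, mirroring the description in the abstract.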

Keywords: disaster management, unmanned aircraft systems, helicopter landing zones, airdrop zones, random forest

Procedia PDF Downloads 177
66 Low Cost LiDAR-GNSS-UAV Technology Development for PT Garam’s Three Dimensional Stockpile Modeling Needs

Authors: Mohkammad Nur Cahyadi, Imam Wahyu Farid, Ronny Mardianto, Agung Budi Cahyono, Eko Yuli Handoko, Daud Wahyu Imani, Arizal Bawazir, Luki Adi Triawan

Abstract:

Unmanned aerial vehicle (UAV) technology has cost-efficiency and data retrieval time advantages. In this work, UAV, GNSS, and LiDAR technologies are combined so that each covers the others' deficiencies. This integrated system aims to increase the accuracy of calculating the volume of the land stockpiles of PT. Garam (Salt Company). UAV imagery is used to obtain geometric data and capture textures that characterize the structure of objects. This study uses the Taror 650 Iron Man drone with four propellers, which can fly for 15 minutes. Point clouds can be classified based on the image acquisitions processed in the software, utilizing photogrammetry and Structure from Motion (SfM) principles. LiDAR enables data acquisition for the creation of point clouds, three-dimensional models, Digital Surface Models, contours, and orthomosaics with high accuracy. A drawback of LiDAR is that its coordinate positions are in a local reference frame. Therefore, the researchers use GNSS, LiDAR, and drone multi-sensor technology to map the salt stockpiles on open land and in warehouses, a survey PT. Garam carries out twice a year; the previous process used terrestrial methods and manual calculations with sacks. The LiDAR needs to be combined with the UAV to overcome data acquisition limitations, because on its own it only passes along the right and left sides of the object, mainly when applied to a salt stockpile. The UAV is flown to assist data acquisition with wide coverage, with the 200-gram LiDAR system integrated so that the flying angle can be optimal during the flight process. Using LiDAR for low-cost mapping surveys will make it easier for surveyors and academics to obtain fairly accurate data at a more economical price. As a survey tool, this LiDAR is low-priced, at around 999 USD, yet can produce detailed data. 
Therefore, to minimize operational costs, surveyors can use low-cost LiDAR, GNSS, and UAV at a price of around 638 USD. The data generated by this sensor take the form of a three-dimensional visualization of an object's shape. This study aims to combine low-cost GPS measurements with low-cost LiDAR, processed using free software. The low-cost GPS generates position data in the form of latitude and longitude coordinates, which provide X, Y, and Z values for georeferencing the detected objects. The LiDAR detects objects, including heights across the entire surveyed environment. The data obtained are calibrated with pitch, roll, and yaw to get the vertical height of the existing contours. This study conducted an experimental process on the roof of a building with a radius of approximately 30 meters.
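As an illustration of the volume calculation such an integrated survey targets, here is a minimal sketch that estimates stockpile volume from a gridded DSM above a flat base plane. The function name, grid, base elevation, and cell size are hypothetical, and real workflows would use the georeferenced DSM produced by the LiDAR/SfM pipeline rather than a synthetic grid.

```python
import numpy as np

def stockpile_volume(dsm: np.ndarray, base_elevation: float,
                     cell_size: float) -> float:
    """Estimate stockpile volume (m^3) from a gridded DSM in metres by
    summing each cell's height above a flat base plane times cell area."""
    heights = np.clip(dsm - base_elevation, 0.0, None)  # ignore cells below base
    return float(heights.sum() * cell_size ** 2)

# Example: a 4 x 4 grid of 0.5 m cells, with the pile 2 m above a 10 m base:
# 16 cells * 2 m * 0.25 m^2 = 8 m^3.
dsm = np.full((4, 4), 12.0)
vol = stockpile_volume(dsm, base_elevation=10.0, cell_size=0.5)
```

In practice the base would come from a ground surface fitted around the pile rather than a single elevation, but the cell-by-cell summation is the same.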

Keywords: LiDAR, unmanned aerial vehicle, low-cost GNSS, contour

Procedia PDF Downloads 94
65 Statistical Models and Time Series Forecasting on Crime Data in Nepal

Authors: Dila Ram Bhandari

Abstract:

Throughout the 20th century, new governments were created in which identities such as ethnicity, religion, language, caste, community, and tribe played a part in the development of constitutions and the legal systems of victim and criminal justice. South Asian nations have recently been plagued by acute issues with extremism, poverty, environmental degradation, cybercrime, human rights violations, and crimes against, and victimization of, both individuals and groups. Every day, a massive number of crimes are committed, and their frequency has made the lives of ordinary citizens restless. Crime is one of the major threats to society and to civilization, a bone of contention that can create societal disturbance. Old-style crime-solving practices are unable to live up to the requirements of the current crime situation. Crime analysis is one of the most important activities of the majority of intelligence and law enforcement organizations all over the world. The South Asia region lacks a regional coordination mechanism, unlike the Central Asia and Asia-Pacific regions, to facilitate criminal intelligence sharing and operational coordination related to organized crime, including illicit drug trafficking and money laundering. There have been numerous conversations in recent years about using data mining technology to combat crime and terrorism. The Data Detective program from the software company Sentient uses data mining techniques to support the police (Sentient, 2017). The goals of this work are to test several predictive model solutions and choose the most effective and promising one. First, extensive literature reviews on data mining, crime analysis, and crime data mining were conducted. Sentient offered a 7-year archive of crime statistics that were aggregated daily to produce a univariate dataset. Moreover, a daily incidence-type aggregation was performed to produce a multivariate dataset. Each solution's forecast period lasted seven days. 
The experiments were split into two main groups: statistical models and neural network models. For the crime data, neural networks fared better than statistical models. This study gives a general review of the applied statistical and neural network models. A comparative analysis of all the models on a comparable dataset provides a detailed picture of each model's performance on the available data and its generalizability. The experiments demonstrated that, in comparison to other models, Gated Recurrent Units (GRU) produced better predictions. The crime records, covering 2005-2019, were collected from Nepal Police Headquarters and analysed in R. In conclusion, a gated recurrent unit implementation could benefit police in predicting crime; hence, time series analysis using GRU could be a prospective additional feature in Data Detective.
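To illustrate the recurrent unit behind the best-performing model, here is a minimal NumPy sketch of a single GRU cell (update and reset gates) stepped over a short daily-count window. The weights, dimensions, and input values are illustrative, biases are omitted for brevity, and this is not the study's actual implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell: update gate z, reset gate r, candidate state h~."""
    def __init__(self, n_in: int, n_hidden: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        shape = (n_hidden, n_in + n_hidden)
        self.Wz = rng.normal(scale=0.1, size=shape)
        self.Wr = rng.normal(scale=0.1, size=shape)
        self.Wh = rng.normal(scale=0.1, size=shape)

    def step(self, x: np.ndarray, h: np.ndarray) -> np.ndarray:
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)                        # how much to update
        r = sigmoid(self.Wr @ xh)                        # how much past to keep
        h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1 - z) * h + z * h_tilde                 # blended new state

# Run the cell over a hypothetical 7-day window of daily incident counts.
cell = GRUCell(n_in=1, n_hidden=8)
h = np.zeros(8)
for value in [3.0, 5.0, 4.0, 6.0, 2.0, 7.0, 5.0]:
    h = cell.step(np.array([value]), h)
```

In a trained forecaster, the final hidden state `h` would feed a linear output layer predicting the next day's count; libraries such as Keras or PyTorch provide optimized, trainable versions of this cell.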

Keywords: time series analysis, forecasting, ARIMA, machine learning

Procedia PDF Downloads 164
64 Modern Detection and Description Methods for Natural Plants Recognition

Authors: Masoud Fathi Kazerouni, Jens Schlemper, Klaus-Dieter Kuhnert

Abstract:

“Green planet” is one of Earth's names; Earth is a terrestrial planet and the fifth largest planet of the solar system. Plants do not have a constant and steady distribution around the world, and even the variation of plant species is not the same within one specific region. The presence of plants is not limited to one field like botany; they appear in fields such as literature and mythology, and they hold useful and inestimable historical records. No one can imagine the world without oxygen, which is produced mostly by plants. Their influence is even more manifest given that no other living species could exist on Earth without plants, as they also form the basic food staples. Regulation of the water cycle and oxygen production are further roles of plants, affecting environment and climate. Plants are the main components of agricultural activities, from which many countries benefit; plants therefore have an impact on the political and economic situations and futures of countries. Due to the importance of plants and their roles, their study is essential in various fields, and consideration of their different applications leads to a focus on their details as well. Automatic recognition of plants is a novel field that can contribute to other research and future studies. Moreover, plants survive in different places and regions by means of adaptations, their special means of coping with hard situations. Weather is one of the parameters that affect plant life and existence in an area. Recognition of plants under different weather conditions is a new window of research in the field; only natural images are usable if weather conditions are to be considered as new factors, which makes for a generalized and useful system. In order to have a general system, the distance from the camera to the plants is considered as another factor. 
Another factor considered is the change of light intensity in the environment, as it varies during the day. Adding these factors makes inventing an accurate and robust system a substantial challenge, and the development of an efficient plant recognition system is essential and effective. One important component of a plant is the leaf, which can be used to implement automatic systems for plant recognition without any human interaction. Given the nature of the images used, an investigation of plant characteristics is carried out; leaves are the first characteristics selected, as reliable parts. Four different plant species are specified, with the goal of classifying them with an accurate system. The current paper is devoted to the principal directions of the proposed methods and implemented system, the image dataset, and the results. The procedure of the algorithm and classification is explained in detail. The first steps, feature detection and description of visual information, are performed using the Scale-Invariant Feature Transform (SIFT), HARRIS-SIFT, and FAST-SIFT methods. The accuracy of the implemented methods is computed. In addition to comparison, the robustness and efficiency of the results under different conditions are investigated and explained.
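As an illustration of the corner detection underlying the HARRIS-SIFT variant mentioned above, here is a minimal NumPy sketch of the Harris corner response R = det(M) - k * trace(M)^2 over the image structure tensor M. The finite-difference gradients, 3x3 box window, and k value are illustrative simplifications (production code would use Gaussian smoothing, e.g. via OpenCV).

```python
import numpy as np

def harris_response(img: np.ndarray, k: float = 0.04) -> np.ndarray:
    """Per-pixel Harris corner response using finite-difference gradients
    and a simple 3x3 box window over the structure tensor."""
    Ix = np.gradient(img, axis=1)
    Iy = np.gradient(img, axis=0)

    def box(a):  # crude 3x3 box filter with edge padding
        p = np.pad(a, 1, mode="edge")
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    Ixx, Iyy, Ixy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    det = Ixx * Iyy - Ixy ** 2
    trace = Ixx + Iyy
    return det - k * trace ** 2  # positive at corners, negative along edges

# A bright square on a dark background: R peaks near its corners and is
# negative along its straight edges.
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0
R = harris_response(img)
```

Points where R is strongly positive are corner candidates; in a HARRIS-SIFT pipeline these detections would then be described with SIFT descriptors.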

Keywords: SIFT combination, feature extraction, feature detection, natural images, natural plant recognition, HARRIS-SIFT, FAST-SIFT

Procedia PDF Downloads 276
63 Interactively Developed Capabilities for Environmental Management Systems: An Exploratory Investigation of SMEs

Authors: Zhuang Ma, Zihan Zhang, Yu Li

Abstract:

Environmental concerns from stakeholders (e.g., governments and customers) have pushed firms to integrate environmental management systems into business processes such as R&D, manufacturing, and marketing. Environmental systems include managing environmental risks and pollution control (e.g., air pollution control, waste-water treatment, noise control, energy recycling, and solid waste treatment) through raw material management, the elimination and reduction of contaminants, recycling, and reuse in firms' operational processes. Despite increasing studies on firms' proactive adoption of environmental management, their focus is primarily on large corporations operating in developed economies. Investigations into the environmental management efforts of small and medium-sized enterprises (SMEs) are scarce. This is problematic because, unlike large corporations, SMEs have limited awareness, resources, and capabilities to adapt their operational routines to address environmental impacts. The purpose of this study is to explore how SMEs develop organizational capabilities through interactions with business partners (e.g., environmental management specialists and customers). Drawing on the resource-based view (RBV) and an organizational capabilities perspective, this study investigates the interactively developed capabilities that allow SMEs to adopt environmental management systems. Using an exploratory approach, the study includes 12 semi-structured interviews with senior managers from four SMEs, two environmental management specialists, and two customers in the pharmaceutical sector in Chongqing, China. 
Findings of this study include three key organizational capabilities: 1) a ‘dynamic marketing’ capability, which allows SMEs to recoup their investments in environmental management systems by developing environmentally friendly products that address customers' ever-changing needs; 2) a ‘process improvement’ capability, which allows SMEs to select and adopt the latest technologies from the biology, chemistry, new material, and new energy sectors into the production system for improved environmental performance and cost reductions; and 3) a ‘relationship management’ capability, which allows SMEs to improve their corporate image among the public, social media, government agencies, and customers, who in turn help SMEs to overcome their competitive disadvantages. These interactively developed capabilities help SMEs to address larger competitors' foothold in the local market, reduce market constraints, and exploit competitive advantages in other regions of China (e.g., Guangdong and Jiangsu). These findings extend the RBV and the organizational capabilities perspective: SMEs can develop the resources and capabilities essential for environmental management through interactions with upstream and downstream business partners. While a limited number of studies have highlighted the importance of interactions among SMEs, customers, suppliers, NGOs, industrial associations, and consulting firms, they did not explore the specific capabilities developed through these interactions. Additionally, the findings can explain how proactive adoption of environmental management systems could help some SMEs overcome institutional and market restraints on their products, thereby springboarding into larger, more environmentally demanding, yet more profitable markets than their existing ones.

Keywords: capabilities, environmental management systems, interactions, SMEs

Procedia PDF Downloads 180
62 Vascular Targeted Photodynamic Therapy Monitored by Real-Time Laser Speckle Imaging

Authors: Ruth Goldschmidt, Vyacheslav Kalchenko, Lilah Agemy, Rachel Elmoalem, Avigdor Scherz

Abstract:

Vascular Targeted Photodynamic therapy (VTP) is a new modality for selective cancer treatment that leads to complete tumor ablation. A photosensitizer, in our case a bacteriochlorophyll derivative, is first administered to the patient, followed by illumination of the tumor area with a near-IR laser for photoactivation. The photoactivated drug releases reactive oxygen species (ROS) in the circulation, which react with blood cells and the endothelium, leading to occlusion of the blood vasculature. If the blood vessels are only partially closed, the tumor may recover and cancer cells could survive; on the other hand, excessive treatment may lead to toxicity in nearby healthy tissues. Simultaneous VTP monitoring and image processing independent of the photoexcitation laser has not yet been reported, to our knowledge. Here we present a method for blood flow monitoring using real-time laser speckle imaging (RTLSI) in the tumor during VTP. We have synthesized over the years a library of bacteriochlorophyll derivatives, among them WST11 and STL-6014. Both are water-soluble derivatives that are retained in the blood vasculature through their partial binding to HSA. WST11 has been approved in Mexico for VTP treatment of prostate cancer at a defined drug dose and illumination time/intensity. Application to other bacteriochlorophyll derivatives or other cancers may require different treatment parameters (such as light/drug administration). VTP parameters for STL-6014 are still under study. This new derivative mainly differs from WST11 by its lack of the central palladium and its conjugation to an Arg-Gly-Asp (RGD) sequence. RGD is a tumor-specific ligand used for targeting the necrotic tumor domains through its affinity to αVβ3 integrin receptors. This enables the study of cell-targeted VTP. We developed a dedicated RTLSI module for data processing, based on the LabVIEW software environment. 
The new module enables acquisition of raw laser speckle images and calculation of the temporal statistics of time-integrated speckles in real time, without additional off-line processing. Using RTLSI, we could monitor the tumor's blood flow following VTP in a CT26 colon carcinoma ear model. VTP with WST11 induced an immediate slowdown of the blood flow within the tumor and a complete final flow arrest, after some sporadic reperfusions. If the irradiation continued further, the blood flow stopped also in the blood vessels of the surrounding healthy tissue, which emphasizes the significance of light dose control. Using our RTLSI system, we could prevent additional healthy tissue damage by controlling the illumination time and restricting blood flow arrest to within the tumor only. In addition, we found that VTP with STL-6014 was most effective when photoactivation was conducted 4 h post-injection, in terms of in-vivo tumor ablation success and blood vessel flow arrest. In conclusion, RTLSI application should allow optimization of VTP efficacy versus toxicity in both the preclinical and clinical arenas.
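The speckle statistic at the heart of laser speckle imaging can be illustrated with a short NumPy sketch of local speckle contrast, K = sigma / mean, over a sliding window; fast flow blurs the speckle pattern and lowers K, while stasis raises it. This is a generic spatial-contrast sketch with an illustrative window size, not the authors' LabVIEW module (which computes temporal statistics of time-integrated speckles).

```python
import numpy as np

def speckle_contrast(frame: np.ndarray, win: int = 7) -> np.ndarray:
    """Local speckle contrast K = std / mean over a sliding win x win window."""
    pad = win // 2
    p = np.pad(frame.astype(float), pad, mode="reflect")
    # Accumulate window sums of I and I^2 to get per-pixel mean and variance.
    s = np.zeros(frame.shape, dtype=float)
    s2 = np.zeros(frame.shape, dtype=float)
    for i in range(win):
        for j in range(win):
            block = p[i:i + frame.shape[0], j:j + frame.shape[1]]
            s += block
            s2 += block ** 2
    n = win * win
    mean = s / n
    var = np.maximum(s2 / n - mean ** 2, 0.0)   # clamp rounding noise
    return np.sqrt(var) / np.maximum(mean, 1e-12)

# A perfectly uniform (fully blurred) frame has zero contrast everywhere,
# as would be expected over a vessel with very fast flow.
K = speckle_contrast(np.full((32, 32), 100.0))
```

Monitoring how K rises within the tumor during VTP is one way such a system could detect the flow slowdown and final arrest described above.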

Keywords: blood vessel occlusion, cancer treatment, photodynamic therapy, real time imaging

Procedia PDF Downloads 223
61 Forest Fire Burnt Area Assessment in a Part of West Himalayan Region Using Differenced Normalized Burnt Ratio and Neural Network Approach

Authors: Sunil Chandra, Himanshu Rawat, Vikas Gusain, Triparna Barman

Abstract:

Forest fires are a recurrent phenomenon in the Himalayan region owing to the presence of vulnerable forest types, topographical gradients, climatic conditions, and anthropogenic pressure. The present study focuses on the identification of forest fire-affected areas in a small part of the West Himalayan region using the differenced normalized burnt ratio method and spectral unmixing methods. The study area has rugged terrain with the presence of sub-tropical pine forest, montane temperate forest, and sub-alpine forest and scrub. The major cause of fires in this region is anthropogenic: human-induced fires for obtaining fresh leaves, scaring wild animals away to protect agricultural crops, grazing practices within reserved forests, and igniting fires for cooking and other purposes. These fires affect a large area on the ground, necessitating precise estimation for further management and policy making. In the present study, two approaches have been used for the burnt area analysis. The first uses the differenced normalized burnt ratio (dNBR) index, computed from burn ratio values generated using the Short-Wave Infrared (SWIR) and Near-Infrared (NIR) bands of the Sentinel-2 image. The results of the dNBR have been compared with the outputs of the spectral unmixing methods. It has been found that dNBR produces good results in fire-affected areas with a homogeneous forest stratum and slopes below 5 degrees. However, in rugged terrain, where the landscape is shaped by topographical variation, vegetation type, and tree density, the results may be strongly influenced by the effects of topography, complexity in tree composition, fuel load composition, and soil moisture. 
Hence, burnt area assessment under such varying factors may not be effectively carried out using the dNBR approach commonly followed over large areas. Another approach attempted in the present study therefore utilizes a spectral unmixing method, in which each individual pixel is tested before an information class is assigned to it. The method uses a neural network approach utilizing the Sentinel-2 bands. The training and testing data are generated from the Sentinel-2 data and the national field inventory, which are further used for generating outputs with machine learning tools. The analysis of the results indicates that the fire-affected regions and their severity can be better estimated using spectral unmixing methods, which have the capability to resolve the noise in the data and can assign each individual pixel to the precise burnt/unburnt class.
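The dNBR computation described above can be sketched as follows, using NBR = (NIR - SWIR) / (NIR + SWIR) and dNBR = NBR_pre - NBR_post. The reflectance values are illustrative, and the band choice (commonly Sentinel-2 band 8 for NIR and band 12 for SWIR) is an assumption about this study's setup.

```python
import numpy as np

def dnbr(nir_pre, swir_pre, nir_post, swir_post):
    """Differenced Normalized Burn Ratio from pre- and post-fire
    NIR and SWIR reflectance arrays; higher dNBR = more severe burn."""
    def nbr(nir, swir):
        return (nir - swir) / np.maximum(nir + swir, 1e-12)  # guard zero division
    return nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)

# Healthy vegetation (high NIR, low SWIR) that burns (NIR drops, SWIR
# rises) yields a strongly positive dNBR for that pixel.
d = dnbr(np.array([0.45]), np.array([0.10]),
         np.array([0.15]), np.array([0.35]))
```

Burn severity classes are then typically obtained by thresholding dNBR, which is exactly where the homogeneity and slope limitations discussed above come into play.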

Keywords: categorical data, log linear modeling, neural network, shifting cultivation

Procedia PDF Downloads 54
60 The Influence of Perinatal Anxiety and Depression on Breastfeeding Behaviours: A Qualitative Systematic Review

Authors: Khulud Alhussain, Anna Gavine, Stephen Macgillivray, Sushila Chowdhry

Abstract:

Background: Estimates show that by the year 2030, mental illness will account for more than half of the global economic burden of disease, second only to non-communicable diseases. The perinatal period is often characterised by psychological ambivalence and a mixed anxiety-depressive condition. Maternal mental disorder is associated with perinatal anxiety and depression and affects breastfeeding behaviours. Studies also indicate that maternal mental health can considerably influence a baby's health in numerous respects and can impact newborn health through lack of adequate breastfeeding. However, studies reporting factors associated with breastfeeding behaviours are predominantly quantitative; it is therefore not clear what literature is available for understanding the factors affecting breastfeeding from perinatal women's perspectives and experiences. Aim: This review aimed to explore the perceptions and experiences of women with perinatal anxiety and depression, as well as how these experiences influence their breastfeeding behaviours. Methods: A systematic literature review of qualitative studies was conducted in line with the Enhancing Transparency in Reporting the Synthesis of Qualitative Research (ENTREQ) guidelines. Four electronic databases (CINAHL, PsycINFO, Embase, and Google Scholar) were explored for relevant studies using a search strategy. The search was restricted to studies published in English between 2000 and 2022. Findings from the literature were screened using pre-defined screening criteria, and the quality of eligible studies was appraised using the Walsh and Downe (2006) checklist. Findings were extracted and synthesised following Braun and Clarke. The review protocol was registered on PROSPERO (Ref: CRD42022319609). Result: A total of 4947 studies were identified from the four databases. Following duplicate removal and screening, 16 studies met the inclusion criteria. The studies included 87 pregnant and 302 post-partum women from 12 countries. 
The participants came from a variety of economic, regional, and religious backgrounds, mainly aged 18 to 45 years. Three main themes were identified: barriers to breastfeeding; breastfeeding facilitators; and emotional disturbance and breastfeeding. Seven subthemes emerged from the data: expectation versus reality; uncertainty about maternal competencies; body image and breastfeeding; lack of sufficient breastfeeding support; how family and caregivers’ support influences positive breastfeeding practices; breastfeeding education; and causes of mental strain among breastfeeding women. Breastfeeding duration is affected in women with mental health disorders, irrespective of their desire to breastfeed. Conclusion: There is significant empirical evidence that breastfeeding behaviour and perinatal mental disturbance are linked. However, there is a lack of evidence to apply the findings to Saudi women due to a lack of empirical qualitative information. To improve the psychological well-being of mothers, it is crucial to explore and recognise any concerns about their mental, physical, and emotional well-being. Therefore, robust research is needed so that breastfeeding intervention researchers and policymakers can focus on specifically what needs to be done to help mentally distressed perinatal women and their newborns.

Keywords: pregnancy, perinatal period, anxiety, depression, emotional disturbance, breastfeeding

Procedia PDF Downloads 98
59 Consumers and Voters’ Choice: Two Different Contexts with a Powerful Behavioural Parallel

Authors: Valentina Dolmova

Abstract:

What consumers choose to buy and who voters select on election days are two questions that have captivated the interest of both academics and practitioners for many decades. The importance of understanding what influences the behavior of those groups, and whether or not we can predict or control it, fuels a steady stream of research in a range of fields. Looking only at the past 40 years, more than 70 thousand scientific papers have been published in each field – consumer behavior and political psychology, respectively. From marketing, economics, and the science of persuasion to political and cognitive psychology, we have all remained heavily engaged. Ever-evolving technology, inevitable socio-cultural shifts, global economic conditions, and much more play an important role in choice equations regardless of context. On one hand, this makes the research efforts always relevant and needed. On the other, the relatively low number of cross-field collaborations, which seem to be picking up only in more recent years, leaves the existing findings isolated in framed bubbles. By performing systematic research across both areas of psychology and building a parallel between theories and factors of influence, however, we find that there is not only a definitive common ground between the behaviors of consumers and voters but that we are moving towards a global model of choice. This means that the lines between contexts are fading, which has a direct implication for what we should focus on when predicting or navigating buyers’ and voters’ behavior. Internal and external factors in four main categories determine the choices we make as consumers and as voters. Together, the personal, psychological, social, and cultural categories create a holistic framework through which all stimuli relating to a particular product or political party get filtered. The analogy “consumer-voter” solidifies further. 
Leading academics suggest that this fundamental parallel is the key to successfully managing political and consumer brands alike. However, we distinguish four additional key stimuli that relate to those factor categories (1/ opportunity costs; 2/ the memory of the past; 3/ recognisable figures/faces; and 4/ conflict), arguing that the level of expertise a person has determines the prevalence of factors or specific stimuli. Our efforts take into account global trends such as the establishment of “celebrity politics” and the image of “ethically concerned consumer brands”, which bridge the gap between contexts to an even greater extent. Scientists and practitioners are pushed to accept the transformative nature of both fields in social psychology. Existing blind spots, as well as the limited amount of research conducted outside American and European societies, open up space for more collaborative efforts in this highly demanding and lucrative field. A mixed-method research design tests three main hypotheses: the first two focus on the irrelevance of context when comparing voting and consumer behavior, through both the factor and stimulus lenses, and the third on determining whether or not the level of expertise in any field skews the weight of the prism we are more likely to choose when evaluating options.

Keywords: buyers’ behaviour, decision-making, voters’ behaviour, social psychology

Procedia PDF Downloads 154
58 Visual Representation of Ancient Chinese Rites with Digitalization Technology: A Case of Confucius Worship Ceremony

Authors: Jihong Liang, Huiling Feng, Linqing Ma, Tianjiao Qi

Abstract:

Confucius is the first sage of Chinese culture. Confucianism, the body of theories represented by Confucius, has long been at the core of Chinese traditional society, serving as the dominant political ideology of the centralized feudal monarchy for more than two thousand years. The Confucius Worship Ceremony, held in the Confucian Temple in Qufu (Confucius’s birthplace) and dedicated to commemorating Confucius and 170 other elites of Confucianism with a whole set of formal rites, pertains to the “Auspicious Rites”, which worship heaven and earth, humans and ghosts. It began as a medium-scale ritual activity but was later upgraded to the supreme national level in the Qing Dynasty. As a national event, it was celebrated by the Emperor as well as by common intellectuals in traditional China. The Ceremony is solemn and respectful, with prescribed and complicated procedures and well-prepared utensils and matched offerings, performed in rhythm with music and dances. Each participant has his place, and everyone follows the specified rules. This magnificent ritual Ceremony, embedded with rich cultural connotations, symbolizes the social acknowledgment of the orthodox culture represented by Confucianism. The rites reflected in this Ceremony are among the most important features of Chinese culture, serving as a key bond in the identification and continuation of Chinese culture. These rites and ritual ceremonies, as cultural memories themselves, are treasures not only of China but of the whole world. However, while ancient Chinese rites have been one of the thorniest and most complicated topics for academics, it is more regrettable that, due to their interruption in practice and historical changes, these rites and ritual ceremonies have become a vague language in today’s academic discourse and strange terms of the past for common people. 
Luckily, by virtue of modern digital technology, we may today be able to reproduce these ritual ceremonies, as most of them can still be found in ancient manuscripts, through which Chinese ancestors tell of the beauty and gravity of their dignified rites and, more importantly, of their spiritual pursuits, with vivid language and lively pictures. This research, based on a review and interpretation of the ancient literature, intends to reconstruct the ancient ritual ceremonies, taking the Confucius Worship Ceremony as a case and making use of digital technology. Using 3D technology, the spatial scenes in the Confucian Temple can be reconstructed in virtual reality; the memorial tablets exhibited in the temple can be mapped with GIS; and the different rites in the ceremonies can be recreated with animation technology. With reference to the lyrics, melodies, and lively pictures recorded in ancient scripts, it is also possible to reproduce the live dancing. Image rendering technology can also help to show the life experience and accomplishments of Confucius. Finally, lining up all the elements in a multimedia narrative form, a complete digitalized Confucius Worship Ceremony can be reproduced, providing an excellent virtual experience that goes beyond time and space by bringing its audience back to that specific historical moment. This digital project, once completed, will play an important role in the inheritance and dissemination of cultural heritage.

Keywords: Confucius worship ceremony, multimedia narrative form, GIS, visual representation

Procedia PDF Downloads 260
57 The Greek Revolution Through the Foreign Press. The Case of the Newspaper "The London Times" In the Period 1821-1828

Authors: Euripides Antoniades

Abstract:

In 1821 the Greek Revolution, a movement under the political influence of the French Revolution and of the corresponding movements in Italy, Germany, and America, sought the liberation of the nation and the establishment of an independent national state. Published topics in the British press regarding the Greek Revolution focused on: a) the right of the Greeks to claim their freedom from Turkish domination in order to establish an independent state based on the principle of national autonomy, b) criticism of Turkish rule as illegal and of the power of the Ottoman Sultan as arbitrary, c) the recognition of the Greek identity and its distinction from the Turkish one, and d) the endorsement of the Greeks as the descendants of the ancient Greeks. The advantage of the newspaper as a medium is that it shares information and ideas and deals with issues in greater depth and detail than other media, such as radio or television. The London Times is a print publication that presents, in chronological or thematic order, the news, opinions, or announcements about the most important events that occurred in a place during a specified period of time. This paper employs the rich archive of The London Times, quoting extracts from publications of the period, to convey the British public perspective on the Greek Revolution from its beginning until the London Protocol of 1828. Furthermore, it analyses the publications of the British newspaper in terms of the number of references to the Greek Revolution, front-page and editorial references, and the size of publications on the Revolution during the period 1821-1828. A combination of qualitative and quantitative content analysis was applied. An attempt was made to record Greek Revolution references along with the usage of specific words and expressions that contribute to the representation of the historical events and their exposure to the reading public. 
Key findings of this research reveal that a) The London Times carried frequent, passionate daily articles concerning the events in Greece, notable for their length and context, b) British public opinion was influenced by this particular newspaper, and c) the newspaper published various news items about the Revolution, adopting the role of animator of the Greek struggle. For instance, war events and the battles of Wallachia and Moldavia, Hydra, Crete, Psara, Missolonghi, and the Peloponnese were presented not only to inform the readers but to promote the essential need for freedom and the establishment of an independent Greek state. In fact, this type of news was the main substance of The London Times’ coverage, establishing a positive image of the Greek Revolution and contributing to European diplomatic developments, such as the standpoint of France (which did not wish to be detached from the conclusions regarding the English loans) and the death of Alexander I of Russia and his succession by the ambitious Nicholas. These factors brought about a change in the attitude of the British and the Russians respectively, who assumed a more positive approach towards Greece. The Great Powers maintained a neutral position in the Greek-Ottoman conflict while at the same time strengthening the Greek side by offering aid.

Keywords: Greece, revolution, newspaper, the London times, London, great britain, mass media

Procedia PDF Downloads 90
56 Connectomic Correlates of Cerebral Microhemorrhages in Mild Traumatic Brain Injury Victims with Neural and Cognitive Deficits

Authors: Kenneth A. Rostowsky, Alexander S. Maher, Nahian F. Chowdhury, Andrei Irimia

Abstract:

The clinical significance of cerebral microbleeds (CMBs) due to mild traumatic brain injury (mTBI) remains unclear. Here we use magnetic resonance imaging (MRI), diffusion tensor imaging (DTI) and connectomic analysis to investigate the statistical association between mTBI-related CMBs, post-TBI changes to the human connectome and neurological/cognitive deficits. This study was undertaken in agreement with US federal law (45 CFR 46) and was approved by the Institutional Review Board (IRB) of the University of Southern California (USC). Two groups, one consisting of 26 (13 females) mTBI victims and another comprising 26 (13 females) healthy control (HC) volunteers, were recruited through IRB-approved procedures. The acute Glasgow Coma Scale (GCS) score was available for each mTBI victim (mean µ = 13.2; standard deviation σ = 0.4). Each HC volunteer was assigned a GCS of 15 to indicate the absence of head trauma at the time of enrollment in our study. Volunteers in the HC and mTBI groups were matched according to their sex and age (HC: µ = 67.2 years, σ = 5.62 years; mTBI: µ = 66.8 years, σ = 5.93 years). MRI [including T1- and T2-weighted volumes, gradient recalled echo (GRE)/susceptibility weighted imaging (SWI)] and gradient echo (GE) DWI volumes were acquired using the same MRI scanner type (Trio TIM, Siemens Corp.). Skull-stripping and eddy current correction were implemented. DWI volumes were processed in TrackVis (http://trackvis.org) and 3D Slicer (http://www.slicer.org). Tensors were fit to DWI data to perform DTI, and tractography streamlines were then reconstructed using deterministic tractography. A voxel classifier was used to identify image features as CMB candidates using Microbleed Anatomic Rating Scale (MARS) guidelines. 
For each peri-lesional DTI streamline bundle, the null hypothesis was formulated as the statement that there was no neurological or cognitive deficit associated with between-scan differences in the mean FA of DTI streamlines within each bundle. The statistical significance of each hypothesis test was calculated at the α = 0.05 level, subject to the family-wise error rate (FWER) correction for multiple comparisons. Results: In HC volunteers, the along-track analysis failed to identify statistically significant differences in the mean FA of DTI streamline bundles. In the mTBI group, significant differences in the mean FA of peri-lesional streamline bundles were found in 21 out of 26 volunteers. In those volunteers where significant differences had been found, these differences were associated with an average of ~47% of all identified CMBs (σ = 21%). In 12 out of the 21 volunteers exhibiting significant FA changes, cognitive functions (memory acquisition and retrieval, top-down control of attention, planning, judgment, cognitive aspects of decision-making) were found to have deteriorated over the six months following injury (r = -0.32, p < 0.001). Our preliminary results suggest that acute post-TBI CMBs may be associated with cognitive decline in some mTBI patients. Future research should attempt to identify mTBI patients at high risk for cognitive sequelae.
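The per-bundle hypothesis tests described above, with family-wise error rate control across many simultaneous comparisons, can be sketched with a Bonferroni correction (a minimal illustration with hypothetical p-values; the abstract does not state which FWER procedure the authors used):

```python
# Bonferroni control of the family-wise error rate (FWER):
# with m hypotheses, each is tested at alpha / m, so the probability
# of making any false rejection across the family stays <= alpha.

def bonferroni_reject(p_values, alpha=0.05):
    """Return a reject (True) / retain (False) flag for each p-value."""
    m = len(p_values)
    threshold = alpha / m
    return [p <= threshold for p in p_values]

# Hypothetical per-bundle p-values for mean-FA differences:
p_vals = [0.001, 0.020, 0.004, 0.300, 0.009]
print(bonferroni_reject(p_vals))  # [True, False, True, False, True]
```

With m = 5 bundles, each test runs at 0.05 / 5 = 0.01, so only the three p-values at or below 0.01 survive the correction; a single test (m = 1) is unchanged.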

Keywords: traumatic brain injury, magnetic resonance imaging, diffusion tensor imaging, connectomics

Procedia PDF Downloads 170
55 Deep Learning in Chest Computed Tomography to Differentiate COVID-19 from Influenza

Authors: Hongmei Wang, Ziyun Xiang, Ying Liu, Li Yu, Dongsheng Yue

Abstract:

Intro: COVID-19 (Coronavirus Disease 2019) has greatly changed the global economic, political and financial ecology. The mutation of the coronavirus in the UK in December 2020 brought new panic to the world. Deep learning was performed on chest computed tomography (CT) scans of COVID-19 and influenza patients to describe their characteristics. The predominant feature of COVID-19 pneumonia was ground-glass opacification, followed by consolidation. Lesion density: most lesions appear as ground-glass shadows, and some lesions coexist with solid lesions. Lesion distribution: the focus is mainly on the dorsal side of the periphery of the lung, concentrated in the lower lobes, and often close to the pleura. Other features include grid-like shadows within ground-glass lesions, thickening of diseased vessels, air bronchogram signs and halo signs. Severe disease involves both lungs in their entirety, showing white-lung signs; air bronchograms can be seen, and there can be a small amount of pleural effusion in the bilateral chest cavity. At the same time, this year's flu season could be near its peak after surging throughout the United States for months. Chest CT of influenza infection is characterized by focal ground-glass shadows in the lungs, with or without patchy consolidation, and bronchiolar air bronchograms visible within the consolidation. There are patchy ground-glass shadows, consolidation, air bronchogram signs, mosaic lung perfusion, etc. The lesions are mostly fused and prominent near the hilum and in both lungs. Grid-like shadows and small patchy ground-glass shadows are visible. Deep neural networks have great potential in image analysis and diagnosis that traditional machine learning algorithms do not. Method: Targeting the two major infectious diseases currently circulating in the world, COVID-19 and influenza, chest CT scans of patients with the two diseases are classified and diagnosed using deep learning algorithms. 
The residual network was proposed to solve the problem of network degradation that arises when a deep neural network (DNN) has too many hidden layers. The deep residual network (ResNet) is a milestone in the history of convolutional neural network (CNN) image models, as it solved the problem of training very deep CNNs. Many visual tasks can achieve excellent results by fine-tuning ResNet. The pre-trained convolutional neural network ResNet is introduced as a feature extractor, eliminating the need to design complex models and perform time-consuming training from scratch. Fastai, built on PyTorch, packages best practices for deep learning and helps find the best way to handle diagnostic issues. Based on the Fastai one-cycle training policy, the classification and diagnosis of lung CT for the two infectious diseases is realized, and a higher recognition rate is obtained. Results: A deep learning model was developed to efficiently identify the differences between COVID-19 and influenza using chest CT.
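The identity-shortcut idea behind ResNet's solution to network degradation can be illustrated with a toy residual block (a minimal pure-Python sketch for illustration only, not the authors' model):

```python
# Toy residual block: the output is relu(F(x) + x). When the learned
# weights of F are near zero the block defaults to the identity mapping,
# so adding more blocks cannot degrade a deeper network below a shallower
# one -- the key property that makes very deep CNNs trainable.

def relu(v):
    return [max(0.0, x) for x in v]

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def residual_block(x, W1, W2):
    # F(x) = W2 . relu(W1 . x); output = relu(F(x) + x)
    fx = matvec(W2, relu(matvec(W1, x)))
    return relu([f + xi for f, xi in zip(fx, x)])

x = [1.0, 2.0, 3.0]
zeros = [[0.0] * 3 for _ in range(3)]
# With zero weights the block reduces to relu(x), i.e. x for positive inputs:
print(residual_block(x, zeros, zeros))  # [1.0, 2.0, 3.0]
```

In the transfer-learning setup described above, stacks of such pre-trained residual layers act as the fixed feature extractor, and only a small classification head is fine-tuned on the CT data.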

Keywords: COVID-19, Fastai, influenza, transfer network

Procedia PDF Downloads 142
54 A Short Dermatoscopy Training Increases Diagnostic Performance in Medical Students

Authors: Magdalena Chrabąszcz, Teresa Wolniewicz, Cezary Maciejewski, Joanna Czuwara

Abstract:

BACKGROUND: Dermoscopy is a clinical tool known to improve the early detection of melanoma and other malignancies of the skin. Over the past few years, melanoma has grown into a disease of socio-economic importance due to its increasing incidence and persistently high mortality rates. Early diagnosis remains the best way to reduce melanoma- and non-melanoma skin cancer-related mortality and morbidity. Dermoscopy is a noninvasive technique that consists of viewing pigmented skin lesions through a hand-held lens. This simple procedure increases melanoma diagnostic accuracy by up to 35%. Dermoscopy is currently the standard for the clinical differential diagnosis of cutaneous melanoma and for qualifying lesions for excision biopsy. Like any clinical tool, training is required for its effective use. The introduction of small and handy dermatoscopes contributed significantly to establishing dermatoscopy as a first-line tool. Non-dermatologist physicians are well positioned for opportunistic melanoma detection; however, education in the skin cancer examination is limited during medical school and traditionally lecture-based. AIM: The aim of this randomized study was to determine whether adding dermoscopy to the standard fourth-year medical curriculum improves the ability of medical students to distinguish between benign and malignant lesions, and to assess acceptability of and satisfaction with the intervention. METHODS: We performed a prospective study in 2 cohorts of fourth-year medical students at the Medical University of Warsaw. Groups taking the dermatology course were randomly assigned to cohort A, with limited access to dermatoscopy through their teacher only (1 dermatoscope per 15 students), or cohort B, with full access to dermatoscopy during their clinical classes (1 dermatoscope per 4 students, constantly available) plus a 15-minute dermoscopy tutorial. 
Students in both study arms completed an image-based test of 10 lesions to assess their ability to differentiate benign from malignant lesions, and a post-intervention survey collecting minimal background information, attitudes about the skin cancer examination, and course satisfaction. RESULTS: Cohort B had higher scores than cohort A in the recognition of nonmelanocytic (P < 0.05) and melanocytic (P < 0.05) lesions. Medical students who had the opportunity to use a dermatoscope themselves also reported higher satisfaction after the dermatology course than the group with limited access to this diagnostic tool. Moreover, according to our results, they were more motivated to learn dermatoscopy and use it in their future everyday clinical practice. LIMITATIONS: There were limited participants. Further study of the application in clinical practice is still needed. CONCLUSION: Although the use of the dermatoscope in dermatology as a specialty is widely accepted, sufficiently validated clinical tools for the examination of potentially malignant skin lesions are lacking in general practice. Introducing medical students to dermoscopy in the fourth-year curriculum of medical school may improve their ability to differentiate benign from malignant lesions. It can also encourage students to use dermatoscopy in their future practice, which can significantly improve the early recognition of malignant lesions and thus decrease melanoma mortality.

Keywords: dermatoscopy, early detection of melanoma, medical education, skin cancer

Procedia PDF Downloads 114
53 To Examine Perceptions and Associations of Shock Food Labelling and to Assess the Impact on Consumer Behaviour: A Quasi-Experimental Approach

Authors: Amy Heaps, Amy Burns, Una McMahon-Beattie

Abstract:

Shock and fear tactics have been used to encourage consumer behaviour change within the UK regarding lifestyle choices such as smoking and alcohol abuse, yet such measures have not been applied to food labels to encourage healthier purchasing decisions. Obesity levels are continuing to rise within the UK, despite efforts made by government and charitable bodies to encourage consumer behavioural changes that would positively influence fat, salt, and sugar intake. We know that taking extreme measures to shock consumers into behavioural changes has worked previously; for example, the anti-smoking television adverts and new standardised cigarette and tobacco packaging have reduced the number of UK adults who smoke or encouraged those who are currently trying to quit. The USA has also introduced new front-of-pack labelling, which is clear, easy to read, and includes concise health warnings on products high in fat, salt, or sugar. This model has been successful, with consumers reducing purchases of products with these warning labels present. Therefore, investigating whether shock labels would have an impact on UK consumer behaviour and purchasing decisions would help to fill the gap within this research field. This study aims to develop an understanding of consumers’ initial responses to shock advertising, with an interest in the perceived long-term impact of shock advertising on consumer food purchasing decisions, behaviour, and attitudes, and will achieve this through a mixed methodological approach with a sample of 25 participants ranging in age from 22 to 60. Within this research, shock mock labels were developed, including a graphic image, a health warning, and get-help information. These labels were made for products (available within the UK) with large market shares which were high in either fat, salt, or sugar. 
Results from online focus groups and mouse-tracking experiments helped to develop an understanding of consumers’ initial responses to shock advertising, with an interest in the perceived long-term impact of shock advertising on consumer food purchasing decisions, behaviour, and attitudes. Preliminary results have shown that consumers believe that the use of graphic images, combined with a health warning, would encourage consumer behaviour change and influence their purchasing decisions regarding products high in fat, salt, and sugar. Preliminary main findings show that graphic mock shock labels may have an impact on consumer behaviour and purchasing decisions, which will, in turn, encourage healthier lifestyles. Focus group results show that 72% of participants indicated that these shock labels would have an impact on their purchasing decisions. During the mouse-tracking trials, this increased to 80% of participants, showing that greater exposure to shock labels may have a bigger impact on potential consumer behaviour and purchasing decision change. In conclusion, preliminary results indicate that graphic shock labels will impact consumer purchasing decisions. The findings allow for a deeper understanding of initial emotional responses to these graphic labels. However, more research is needed to test the longevity of these labels’ effect on consumer purchasing decisions, but this research exercise is demonstrably the foundation for future detailed work.

Keywords: consumer behavior, decision making, labelling legislation, purchasing decisions, shock advertising, shock labelling

Procedia PDF Downloads 67
52 Comparing Radiographic Detection of Simulated Syndesmosis Instability Using Standard 2D Fluoroscopy Versus 3D Cone-Beam Computed Tomography

Authors: Diane Ghanem, Arjun Gupta, Rohan Vijayan, Ali Uneri, Babar Shafiq

Abstract:

Introduction: Ankle sprains and fractures often result in syndesmosis injuries. Unstable syndesmotic injuries result from relative motion between the distal ends of the tibia and fibula, an anatomic juncture that should otherwise be rigid, and warrant operative management. Clinical and radiological evaluation of intraoperative syndesmosis stability remains a challenging task, as traditional 2D fluoroscopy is limited to uniplanar translational displacement. The purpose of this pilot cadaveric study is to compare 2D fluoroscopy and 3D cone-beam computed tomography (CBCT) measurements of stress-induced syndesmosis displacement. Methods: Three fresh-frozen lower legs underwent 2D fluoroscopy and 3D CIOS CBCT to measure syndesmosis position before dissection. Syndesmotic injury was simulated by resecting (1) the anterior inferior tibiofibular ligament (AITFL), (2) the posterior inferior tibiofibular ligament (PITFL) and the inferior transverse ligament (ITL) simultaneously, and (3) the interosseous membrane (IOM). Manual external rotation and the Cotton stress test were performed after each of the three resections, and 2D and 3D images were acquired. Relevant 2D and 3D parameters included the tibiofibular overlap (TFO), tibiofibular clear space (TCS), relative rotation of the fibula, and anterior-posterior (AP) and medial-lateral (ML) translations of the fibula relative to the tibia. Parameters were measured by two independent observers. Inter-rater reliability was assessed by intraclass correlation coefficient (ICC) to determine measurement precision. Results: Significant mismatches were found in the trends between the 2D and 3D measurements when assessing TFO, TCS and AP translation across the different resection states. Using 3D CBCT, TFO was inversely proportional to the number of resected ligaments, while TCS was directly proportional to the latter, across all cadavers and ‘resection + stress’ states. 
Using 2D fluoroscopy, this trend was not respected under the Cotton stress test. 3D AP translation did not show a reliable trend, whereas 2D AP translation of the fibula was positive under the Cotton stress test and negative under external rotation. 3D relative rotation of the fibula, assessed using the Tang et al. ratio method and the Beisemann et al. angular method, suggested slight overall internal rotation with complete resection of the ligaments, with a change < 2 mm, the threshold that corresponds to the buffer commonly used to account for physiologic laxity as per the clinical judgment of the surgeon. Excellent agreement (> 0.90) was found between the two independent observers for each of the parameters in both 2D and 3D (overall ICC 0.9968, 95% CI 0.995 - 0.999). Conclusions: The 3D CIOS CBCT appears to reliably depict the trends in TFO and TCS. This might be due to the additional detection of relevant rotational malpositions of the fibula, in comparison with standard 2D fluoroscopy, which is limited to single-plane translation. A better understanding of 3D imaging may help surgeons identify the precise measurement planes needed to achieve better syndesmosis repair.
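The inter-rater agreement reported above can be computed with a two-way intraclass correlation; a minimal sketch follows, assuming the ICC(2,1) form and hypothetical ratings (the abstract does not specify which ICC variant was used):

```python
# ICC(2,1): two-way random-effects, absolute-agreement, single-rater ICC,
# computed from the standard two-way ANOVA mean squares.

def icc_2_1(ratings):
    """ratings: list of rows (subjects) x columns (raters)."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(r[j] for r in ratings) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between raters
    ss_err = ss_total - ss_rows - ss_cols                    # residual
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two observers in perfect agreement on four measurements yield ICC = 1.0:
print(icc_2_1([[9, 9], [8, 8], [7, 7], [5, 5]]))  # 1.0
```

Small disagreements between the observers pull the ICC below 1, while values above 0.90 are conventionally read as excellent agreement, as in the result quoted above.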

Keywords: 2D fluoroscopy, 3D computed tomography, image processing, syndesmosis injury

Procedia PDF Downloads 70
51 Rapid, Automated Characterization of Microplastics Using Laser Direct Infrared Imaging and Spectroscopy

Authors: Andreas Kerstan, Darren Robey, Wesam Alvan, David Troiani

Abstract:

Over the last 3.5 years, quantum cascade laser (QCL) technology has become increasingly important in infrared (IR) microscopy. The advantages over Fourier transform infrared (FTIR) microscopy are that large areas of a few square centimeters can be measured in minutes and that the high-intensity QCL makes it possible to obtain spectra with excellent S/N, even with just one scan. A firmly established application of the 8700 laser direct infrared (LDIR) imaging system is the analysis of microplastics. The presence of microplastics in the environment, drinking water, and food chains is gaining significant public interest. To study their presence, rapid and reliable characterization of microplastic particles is essential. Significant technical hurdles in microplastic analysis stem from the sheer number of particles to be analyzed in each sample. Total particle counts of several thousand are common in environmental samples, while well-treated bottled drinking water may contain relatively few. While visual microscopy has been used extensively, it is prone to operator error and bias and is limited to particles larger than 300 µm. As a result, vibrational spectroscopic techniques such as Raman and FTIR microscopy have become more popular; however, they are time-consuming. There is a demand for rapid and highly automated techniques to measure particle count and size and to provide high-quality polymer identification. Analysis directly on the filter that often forms the last stage in sample preparation is highly desirable as, by removing a sample preparation step, it can both improve laboratory efficiency and decrease opportunities for error. Recent advances in infrared micro-spectroscopy combining a QCL with scanning optics have created a new paradigm, LDIR. It offers improved speed of analysis as well as high levels of automation. Its mode of operation, however, requires an IR-reflective background, and this has, to date, limited the ability to perform direct “on-filter” analysis. 
This study explores the potential to combine the filter membrane with an infrared-reflective surface. By combining an IR-reflective material or coating on a filter membrane with advanced image analysis and detection algorithms, it is demonstrated that such filters can indeed be used in this way. Vibrational spectroscopic techniques play a vital role in the investigation and understanding of microplastics in the environment and the food chain. While vibrational spectroscopy is widely deployed, improvements and novel innovations in these techniques that increase the speed of analysis and ease of use can provide pathways to higher testing rates and, hence, an improved understanding of the impacts of microplastics in the environment. Due to its capability to measure large areas in minutes, its speed, degree of automation, and excellent S/N, the LDIR could also be implemented for various other samples, such as food adulteration, coatings, laminates, fabrics, textiles and tissues. This presentation will highlight a few of them and focus on the benefits of the LDIR versus classical techniques.

Keywords: QCL, automation, microplastics, tissues, infrared, speed

Procedia PDF Downloads 66
50 The Impact of a Simulated Teaching Intervention on Preservice Teachers’ Sense of Professional Identity

Authors: Jade V. Rushby, Tony Loughland, Tracy L. Durksen, Hoa Nguyen, Robert M. Klassen

Abstract:

This paper reports a study investigating the development and implementation of an online multi-session ‘scenario-based learning’ (SBL) program administered to preservice teachers in Australia. The transition from initial teacher education to the teaching profession can present numerous cognitive and psychological challenges for early career teachers. Therefore, the identification of additional supports, such as scenario-based learning, that can supplement existing teacher education programs may help preservice teachers to feel more confident and prepared for the realities and complexities of teaching. Scenario-based learning is grounded in situated learning theory which holds that learning is most powerful when it is embedded within its authentic context. SBL exposes participants to complex and realistic workplace situations in a supportive environment and has been used extensively to help prepare students in other professions, such as legal and medical education. However, comparatively limited attention has been paid to investigating the effects of SBL in teacher education. In the present study, the SBL intervention provided participants with the opportunity to virtually engage with school-based scenarios, reflect on how they might respond to a series of plausible response options, and receive real-time feedback from experienced educators. The development process involved several stages, including collaboration with experienced educators to determine the scenario content based on ‘critical incidents’ they had encountered during their teaching careers, the establishment of the scoring key, the development of the expert feedback, and an extensive review process to refine the program content. The 4-part SBL program focused on areas that can be challenging in the beginning stages of a teaching career, including managing student behaviour and workload, differentiating the curriculum, and building relationships with colleagues, parents, and the community. 
Results from prior studies implemented by the research group using a similar 4-part format have shown a statistically significant increase in preservice teachers’ self-efficacy and classroom readiness from the pre-test to the final post-test. In the current research, professional teaching identity - incorporating self-efficacy, motivation, self-image, satisfaction, and commitment to teaching - was measured over six weeks at multiple time points: before, during, and after the 4-part scenario-based learning program. Analyses included latent growth curve modelling to assess the trajectory of change in the outcome variables throughout the intervention. The paper outlines (1) the theoretical underpinnings of SBL, (2) the development of the SBL program and methodology, and (3) the results from the study, including the impact of the SBL program on aspects of participating preservice teachers’ professional identity. The study shows how SBL interventions can be implemented alongside the initial teacher education curriculum to help prepare preservice teachers for the transition from student to teacher.

Keywords: classroom simulations, e-learning, initial teacher education, preservice teachers, professional learning, professional teaching identity, scenario-based learning, teacher development

Procedia PDF Downloads 71
49 An Aptasensor Based on Magnetic Relaxation Switch and Controlled Magnetic Separation for the Sensitive Detection of Pseudomonas aeruginosa

Authors: Fei Jia, Xingjian Bai, Xiaowei Zhang, Wenjie Yan, Ruitong Dai, Xingmin Li, Jozef Kokini

Abstract:

Pseudomonas aeruginosa is a Gram-negative, aerobic, opportunistic human pathogen that is present in soil, water, and food. This microbe has been recognized as a representative food-borne spoilage bacterium that can lead to many types of infections. Considering the casualties and property loss caused by P. aeruginosa, the development of a rapid and reliable technique for its detection is crucial. The whole-cell aptasensor, an emerging biosensor that uses an aptamer as a capture probe to bind to the whole cell, has attracted much attention for food-borne pathogen detection due to its convenience and high sensitivity. Here, a low-field magnetic resonance imaging (LF-MRI) aptasensor for the rapid detection of P. aeruginosa was developed. The basic detection principle of the magnetic relaxation switch (MRSw) nanosensor lies in the ‘T₂-shortening’ effect of magnetic nanoparticles in NMR measurements. Briefly, the transverse relaxation time (T₂) of neighboring water protons is shortened when magnetic nanoparticles cluster due to cross-linking upon the recognition and binding of biological targets, or simply when the concentration of the magnetic nanoparticles increases. Such shortening is related to both the state change (aggregation or dissociation) and the concentration change of the magnetic nanoparticles and can be detected using NMR relaxometry or MRI scanners. In this work, two different sizes of magnetic nanoparticles, 10 nm (MN₁₀) and 400 nm (MN₄₀₀) in diameter, were first immobilized separately with an anti-P. aeruginosa aptamer through 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide (EDC)/N-hydroxysuccinimide (NHS) chemistry, to capture and enrich the P. aeruginosa cells. When incubated with the target, a ‘sandwich’ (MN₁₀-bacteria-MN₄₀₀) complex is formed, driven by the binding of MN₄₀₀ to P. aeruginosa through aptamer recognition, as well as the conjugate aggregation of MN₁₀ on the surface of P. aeruginosa.
Due to the different magnetic behavior of MN₁₀ and MN₄₀₀ in a magnetic field, caused by their different saturation magnetizations, the MN₁₀-bacteria-MN₄₀₀ complex, as well as the unreacted MN₄₀₀ in the solution, can be quickly removed by magnetic separation, so that only unreacted MN₁₀ remain in the solution. The remaining MN₁₀, which are superparamagnetic and stable in a low magnetic field, serve as the signal readout for the T₂ measurement. Under optimum conditions, the LF-MRI platform provides both image analysis and quantitative detection of P. aeruginosa, with a detection limit as low as 100 cfu/mL. The feasibility and specificity of the aptasensor are demonstrated by detecting real food samples and validated using plate counting methods. Requiring only two steps and less than 2 hours for the detection procedure, this robust aptasensor can detect P. aeruginosa over a wide linear range from 3.1 × 10² cfu/mL to 3.1 × 10⁷ cfu/mL, which is superior to the conventional plate counting method and other molecular biology testing assays. Moreover, the aptasensor has the potential to detect other bacteria or toxins by substituting suitable aptamers. Considering its excellent accuracy, feasibility, and practicality, the whole-cell aptasensor provides a promising platform for the quick, direct and accurate determination of food-borne pathogens at the cell level.
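As a rough illustration of how quantitation in such a T₂-based assay typically works (only the 3.1 × 10² to 3.1 × 10⁷ cfu/mL linear range is taken from the abstract; the T₂ readings below are synthetic, not the study's data), the readout is fit linearly against the logarithm of cell concentration and then inverted to estimate an unknown sample:

```python
import numpy as np

# Synthetic calibration points: T2 readout (ms) vs. cell concentration
# (cfu/mL) across the linear range reported in the abstract.
conc = np.array([3.1e2, 3.1e3, 3.1e4, 3.1e5, 3.1e6, 3.1e7])
t2 = np.array([120.0, 135.0, 150.0, 165.0, 180.0, 195.0])  # hypothetical values

# Linear fit of T2 against log10(concentration)
slope, intercept = np.polyfit(np.log10(conc), t2, 1)

def estimate_cfu(t2_measured: float) -> float:
    """Invert the calibration line to estimate concentration from T2."""
    return 10 ** ((t2_measured - intercept) / slope)
```

With these synthetic points the fit is exact, so a measured T₂ of 150 ms maps back onto the mid-range calibration concentration; with real data the fit residuals would set the assay's uncertainty.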

Keywords: magnetic resonance imaging, meat spoilage, P. aeruginosa, transverse relaxation time

Procedia PDF Downloads 152
48 Disseminating Positive Psychology Resources Online: Current Research and Future Directions

Authors: Warren Jared, Bekker Jeremy, Salazar Guy, Jackman Katelyn, Linford Lauren

Abstract:

Introduction: Positive Psychology research has burgeoned in the past 20 years; however, relatively few evidence-based resources to cultivate positive psychology skills are widely available to the general public. The positive psychology resources at www.mybestself101.org were developed to assist individuals in cultivating well-being using a variety of techniques, including gratitude, purpose, mindfulness, self-compassion, savoring, personal growth, and supportive relationships. These resources are empirically based and are built to be accessible to a broad audience. Key Objectives: This presentation highlights results from two recent randomized intervention studies of specific MBS101 learning modules. A key objective of this research is to empirically assess the efficacy and usability of these online resources. Another objective of this research is to encourage the broad dissemination of online positive psychology resources; thus, recommendations for further research and dissemination will be discussed. Methods: In both interventions, we recruited adult participants using social media advertisements. The participants completed several well-being and positive psychology construct-specific measures (savoring and self-compassion measures) at baseline and post-intervention. Participants in the experimental condition were also given a feedback questionnaire to gather qualitative data on how participants viewed the modules. Participants in the self-compassion study were randomly split between an experimental group, who received the treatment, and a control group, who were placed on a waitlist. There was no control group for the savoring study. Participants were instructed to read content on the module and practice savoring or self-compassion strategies listed in the module for a minimum of twenty minutes a day for 21 days. 
The intervention was semi-structured, as participants were free to choose which module activities they would complete from a menu of research-based strategies. Participants tracked which activities they completed and how long they spent on the modules each day. Results: In the savoring study, participants increased in savoring ability as indicated by multiple measures. In addition, participants increased in well-being from pre- to post-treatment. In the self-compassion study, repeated measures mixed model analyses revealed that compared to waitlist controls, participants who used the MBS101 self-compassion module experienced significant improvements in self-compassion, well-being, and body image with effect sizes ranging from medium to large. Attrition was 10.5% for the self-compassion study and 71% for the savoring study. Overall, participants indicated that the modules were generally helpful, and they particularly appreciated the specific strategy menus. Participants requested more structured course activities, more interactive content, and more practice activities overall. Recommendations: Mybestself101.org is an applied positive psychology research program that shows promise as a model for effectively disseminating evidence-based positive psychology resources that are both engaging and easily accessible. Considerable research is still needed, both to test the efficacy and usability of the modules currently available and to improve them based on participant feedback. Feedback received from participants in the randomized controlled trial led to the development of an expanded, 30-day online course called The Gift of Self-Compassion and an online mindfulness course currently in development called Mindfulness For Humans.

Keywords: positive psychology, intervention, online resources, self-compassion, dissemination, online curriculum

Procedia PDF Downloads 204
47 Tunable Graphene Metasurface Modeling Using the Method of Moment Combined with Generalised Equivalent Circuit

Authors: Imen Soltani, Takoua Soltani, Taoufik Aguili

Abstract:

Metamaterials cross over classical physical boundaries and give rise to new phenomena and applications in the domain of beam steering and shaping, where electromagnetic near- and far-field manipulations are achieved in an accurate manner. In this sense, 3D imaging is one of the beneficiaries, and in particular Dennis Gabor’s invention: holography. The major difficulty here, however, is the lack of a suitable recording medium, so some enhancements were essential, and the 2D version of bulk metamaterials was introduced: the so-called metasurface. This new class of interfaces simplifies the problem of the recording medium with the capability of tuning the phase, amplitude, and polarization at a given frequency. In order to achieve intelligible wavefront control, the electromagnetic properties of the metasurface should be optimized by solving Maxwell’s equations. In this context, integral methods are emerging as an important approach to study electromagnetics from microwave to optical frequencies. The method of moments provides an accurate solution that reduces the dimensionality of the problem by writing its boundary conditions in the form of integral equations. Solving this kind of equation, however, tends to become more complicated and time-consuming as the structural complexity increases. Here, the equivalent circuits method offers the most scalable way to develop an integral method formulation. In fact, to ease the resolution of Maxwell’s equations, the method of the Generalised Equivalent Circuit was proposed to transfer the resolution from the domain of integral equations to the domain of equivalent circuits. This technique consists in creating an electrical image of the studied structure using the discontinuity plane paradigm while taking its environment into account. The electromagnetic state of the discontinuity plane is thus described by generalised test functions, which are modelled by virtual sources that do not store energy.
The environmental effects are included through the use of an impedance or admittance operator. Here, we propose a tunable metasurface composed of graphene-based elements, which combines the advantages of the reflectarray concept with graphene as a pillar constituent element at terahertz frequencies. The metasurface’s building block consists of a thin gold film, a SiO₂ dielectric spacer, and a graphene patch antenna. Our electromagnetic analysis is based on the method of moments combined with the generalised equivalent circuit (MoM-GEC). We begin by restricting our attention to the effects of varying graphene’s chemical potential on the unit cell input impedance. It was found that the variation of the complex conductivity of graphene allows control of the phase and amplitude of the reflection coefficient at each element of the array. From the results obtained here, we were able to determine that phase modulation is realized by adjusting graphene’s complex conductivity. This modulation is a viable solution compared to tuning the phase by varying the antenna length because it offers full 2π reflection phase control.
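The chemical-potential dependence invoked above is commonly modeled by the Kubo conductivity of graphene; its intraband (Drude-like) term, which dominates at terahertz frequencies, is a standard result not stated in the abstract:

```latex
\sigma_{\text{intra}}(\omega) =
  \frac{2 e^{2} k_{B} T}{\pi \hbar^{2}}
  \ln\!\left[ 2 \cosh\!\left( \frac{\mu_{c}}{2 k_{B} T} \right) \right]
  \frac{i}{\omega + i \tau^{-1}}
```

where μc is the chemical potential and τ the carrier relaxation time. Raising μc (for instance by electrostatic gating) increases the conductivity magnitude, which is the physical handle behind the reflection-phase control described in the abstract.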

Keywords: graphene, method of moment combined with generalised equivalent circuit, reconfigurable metasurface, reflectarray, terahertz domain

Procedia PDF Downloads 176
46 Applications of Polyvagal Theory for Trauma in Clinical Practice: Auricular Acupuncture and Herbology

Authors: Aurora Sheehy, Caitlin Prince

Abstract:

Within current orthodox medical protocols, trauma and mental health issues are deemed to reside within the realm of cognitive or psychological therapists and are marginalised in these areas, in part due to the limited drug options available, which mostly manipulate neurotransmitters or sedate patients to reduce symptoms. By contrast, this research presents examples from clinical practice of how trauma can be assessed and treated physiologically. Adverse Childhood Experiences (ACEs) are a tally of different types of abuse and neglect. The ACE score has been used as a measurable and reliable predictor of the likelihood of developing autoimmune disease, and it is a direct way to reliably demonstrate the health impact of traumatic life experiences. A second assessment tool is allostatic load, which refers to the cumulative effects that chronic stress has on mental and physical health. It records the decline of an individual’s physiological capacity to cope with their experience, using a specific grouping of serum tests and physical measures that includes an assessment of the neuroendocrine, cardiovascular, immune and metabolic systems. Allostatic load demonstrates the health impact that trauma has throughout the body. It forms part of an initial intake assessment in clinical practice and could also be used in research to evaluate treatment. Examining medicinal plants for their physiological, neurological and somatic effects through the lens of Polyvagal theory offers new opportunities for trauma treatments. In situations where Polyvagal theory recommends activities and exercises to enable parasympathetic activation, many herbs that affect Effector Memory T (TEM) cells also enact these responses. Traditional or Indigenous European herbs show the potential to support polyvagal tone through multiple mechanisms.
As the ventral vagal nerve reaches almost every major organ, plants that act on these tissues can be understood via their polyvagal actions: monoterpenes as agents to improve respiratory vagal tone, cyanogenic glycosides to reset polyvagal tone, volatile oils rich in phenyl methyl esters to improve both sympathetic and parasympathetic tone, and bitters to activate gut function and strongly promote parasympathetic regulation. Auricular acupuncture uses a somatotopic mapping of the auricular surface overlaid with an image of an inverted foetus, with each body organ and system featured. Given that the concha of the auricle is the only place on the body where vagus nerve neurons reach the surface of the skin, several investigators have evaluated non-invasive, transcutaneous electrical nerve stimulation (TENS) at auricular points. Drawn from an interdisciplinary evidence base and developed through clinical practice, these assessment and treatment tools are examples of practitioners in the field innovating out of necessity for the best outcomes for patients. This paper draws on case studies to direct future research.

Keywords: polyvagal, auricular acupuncture, trauma, herbs

Procedia PDF Downloads 92
45 The Effects of Labeling Cues on Sensory and Affective Responses of Consumers to Categories of Functional Food Carriers: A Mixed Factorial ANOVA Design

Authors: Hedia El Ourabi, Marc Alexandre Tomiuk, Ahmed Khalil Ben Ayed

Abstract:

The aim of this study is to investigate the effects of the labeling cues traceability (T), health claim (HC), and verification of health claim (VHC) on consumer affective response and sensory appeal toward a wide array of functional food carriers (FFC). Research in the food area has predominantly examined the effects of these information cues independently on cognitive responses to food product offerings. Investigations and findings of potential interaction effects among these factors on affective response and sensory appeal are therefore scant. Moreover, previous studies have typically emphasized single or limited sets of functional food products and categories. In turn, this study considers five food product categories enriched with omega-3 fatty acids, namely: meat products, eggs, cereal products, dairy products, and processed fruits and vegetables. It is, therefore, exhaustive in scope rather than exclusive. An investigation of the potential simultaneous effects of these information cues on the affective responses and sensory appeal of consumers should give rise to important insights for both functional food manufacturers and policymakers. A mixed (2 x 3) x (2 x 5) between-within subjects factorial ANOVA design was implemented in this study. T (two levels: completely traceable or non-traceable) and HC (three levels: functional health claim, disease risk reduction health claim, or disease prevention health claim) were treated as between-subjects factors, whereas VHC (two levels: verification by a government agency or by a non-government agency) and FFC (five food categories) were modeled as within-subjects factors. Subjects were randomly assigned to one of the six between-subjects conditions. A total of 463 questionnaires were obtained from a convenience sample of undergraduate students at various universities in the Montreal and Ottawa areas (in Canada).
Consumer affective response and sensory appeal were respectively measured via the following statements assessed on seven-point semantic differential scales: ‘Your evaluation of [food product category] enriched with omega-3 fatty acids is Unlikeable (1) / Likeable (7)’ and ‘Your evaluation of [food product category] enriched with omega-3 fatty acids is Unappetizing (1) / Appetizing (7).’ Results revealed a significant interaction effect between HC and VHC on consumer affective response as well as on sensory appeal toward foods enriched with omega-3 fatty acids. On the other hand, the three-way interaction effect between T, HC, and VHC on either of the two dependent variables was not significant. However, the triple interaction effect among T, VHC, and FFC was significant on consumer affective response, and the interaction effect among T, HC, and FFC was significant on consumer sensory appeal. Findings of this study should serve as an impetus for functional food manufacturers to closely cooperate with policymakers in order to improve on and legitimize the use of health claims in their marketing efforts through credible verification practices and protocols put in place by trusted government agencies. Finally, both functional food manufacturers and retailers may benefit from the socially responsible image conveyed by product offerings whose ingredients remain traceable from farm to kitchen table.
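The mixed (2 x 3) x (2 x 5) design described above can be sketched as a long-format data layout. This is an illustrative reconstruction only (one hypothetical subject per between-subjects cell, no response data), not the authors' actual dataset of 463 respondents:

```python
from itertools import product

import pandas as pd

# Between-subjects factors: each subject sees exactly one (T, HC) cell.
T = ["traceable", "non-traceable"]
HC = ["functional", "risk-reduction", "prevention"]
# Within-subjects factors: each subject rates all 2 x 5 = 10 combinations.
VHC = ["government", "non-government"]
FFC = ["meat", "eggs", "cereal", "dairy", "fruit-veg"]

rows = []
for subject, (t, hc) in enumerate(product(T, HC)):  # 6 between-subjects cells
    for vhc, ffc in product(VHC, FFC):  # 10 within-subjects measurements each
        rows.append({"subject": subject, "T": t, "HC": hc, "VHC": vhc, "FFC": ffc})

design = pd.DataFrame(rows)
print(len(design))  # 6 between-cells x 10 within-rows = 60 rows
```

In the actual study, each of the 463 subjects would contribute 10 rows of this shape, and a mixed-model ANOVA would then test the between, within, and interaction effects reported in the abstract.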

Keywords: functional foods, labeling cues, affective appeal, sensory appeal

Procedia PDF Downloads 164
44 A Review of Data Visualization Best Practices: Lessons for Open Government Data Portals

Authors: Bahareh Ansari

Abstract:

Background: The Open Government Data (OGD) movement in the last decade has encouraged many government organizations around the world to make their data publicly available to advance democratic processes. But current open data platforms have not yet reached their full potential in supporting all interested parties. To make the data useful and understandable for everyone, scholars have suggested that opening the data should be supplemented by visualization. However, different visualizations of the same information can dramatically change an individual’s cognitive and emotional experience in working with the data. This study reviews the data visualization literature to create a list of the methods empirically tested to enhance users’ performance and experience in working with a visualization tool. This list can be used in evaluating OGD visualization practices and informing future open data initiatives. Methods: Previous reviews of the visualization literature categorized visualization outcomes into four categories: recall/memorability, insight/comprehension, engagement, and enjoyment. To identify the papers, a search for these outcomes was conducted in the abstracts of publications in top-tier visualization venues, including IEEE Transactions on Visualization and Computer Graphics, Computer Graphics, and the proceedings of the CHI Conference on Human Factors in Computing Systems. The search results were complemented with a search in the references of the identified articles, and a search for the 'open data visualization' and 'visualization evaluation' keywords in the IEEE Xplore and ACM digital libraries. Articles are included if they provide empirical evidence through controlled user experiments or provide a review of such empirical studies. The qualitative synthesis of the studies focuses on identifying and classifying the methods, and the conditions under which they are examined to positively affect the visualization outcomes.
Findings: The keyword search yielded 760 studies, of which 30 were included after the title/abstract review. The classification of the included articles shows five distinct methods: interactive design, aesthetic (artistic) style, storytelling, decorative elements that do not provide extra information (including text, images, and embellishments on the graphs), and animation. Studies on decorative elements consistently find positive effects of these elements on user engagement and recall but are less consistent in their examination of user performance. This inconsistency could be attributable to the particular data type or specific design method used in each study. The interactive design studies are consistent in their findings of a positive effect on the outcomes. Storytelling studies show some inconsistencies regarding the design effect on user engagement, enjoyment, recall, and performance, which could be indicative of the specific conditions required for the use of this method. The last two methods, aesthetics and animation, appear less frequently in the included articles and provide consistent positive results on some of the outcomes. Implications for e-government: The review of visualization best-practice methods shows that each of these methods is beneficial under specific conditions. By using these methods in a potentially beneficial condition, OGD practices can encourage a wide range of individuals to engage and work with government data and ultimately take part in government policy-making procedures.

Keywords: best practices, data visualization, literature review, open government data

Procedia PDF Downloads 106
43 The Sense of Recognition of Muslim Women in Western Academia

Authors: Naima Mohammadi

Abstract:

The present paper critically reports on the emergence of Iranian international students at a large public university in Italy. Although the most sizeable diaspora of Iranians dates back to the 1979 revolution, a huge wave of Iranian female students travelled abroad after the Iranian Green Movement (2009) due to the intensification of gender discrimination and Islamization. To explore the experience of Iranian female students at an Italian public university, two complementary methods were adopted: a focus group and individual interviews. Focus groups yield detailed collective conversations and provide researchers with an opportunity to observe the interaction between participants, rather than between participant and researcher, which generates data. Semi-structured interviews allow participants to share their stories in their own words and speak about personal experiences and opinions. Research participants were invited to participate through a public call in a Telegram group of Iranian students. Theoretical and purposive sampling was applied to select participants. All participants were assured that full anonymity would be ensured, and they consented to take part in the research. A two-hour focus group, consisting of 8 Iranian female post-graduate students, was held in English, with some participants in person and some online. They were asked to share their motivations for studying in Italy and talk about their experiences both within and outside the university context. The individual interviews lasted from 45 to 60 minutes each and were mostly carried out online and in Farsi. In analyzing the data, a blended approach was adopted, with a combination of deductive and inductive coding. According to the research findings, although 9/11 was the beginning of the West’s challenges against Muslims, the nuclear threats of Islamic regimes promoted the toughest international sanctions against Iranians as a nation across the world.
Accordingly, carrying an Iranian identity contributes to social, political, and economic exclusion. Research findings show that geopolitical factors such as international sanctions and Islamophobia, and a lack of reciprocity in terms of recognition, have created a sense of stigmatization for veiled and unveiled Iranian female students, who are the largest group of ‘non-European Muslim international students’ enrolled in Italian universities. Participants addressed how their nationality has devalued their public image and negatively impacted their self-confidence and self-realization in academia. They highlighted the experience of an unwelcoming atmosphere created by different groups of people and institutions, such as receiving marked student badges, rejected bank account requests, failed visa processes, secondary security screening selection, and the hyper-visibility of veiled students. This study corroborates the need for institutions to pay attention to geopolitical factors and religious diversity in student recruitment and to provide support mechanisms and access to basic rights. Accordingly, it is suggested that Higher Education Institutions (HEIs) have a social and moral responsibility to address the discrimination and the social and academic exclusion of Iranian students.

Keywords: Iranian diaspora, female students, recognition theory, inclusive university

Procedia PDF Downloads 73
42 A Digital Environment for Developing Mathematical Abilities in Children with Autism Spectrum Disorder

Authors: M. Isabel Santos, Ana Breda, Ana Margarida Almeida

Abstract:

Research on the academic abilities of individuals with autism spectrum disorder (ASD) underlines the importance of mathematics interventions. Yet the development of digital applications for children and youth with ASD continues to attract little attention, namely regarding the development of mathematical reasoning, even though digital technologies are an area of great interest for individuals with this disorder and their use is certainly a facilitative strategy in the development of mathematical abilities. The use of digital technologies can be an effective way to create innovative learning opportunities for these students and to develop creative, personalized and constructive environments where they can develop differentiated abilities. Children with ASD often respond well to learning activities involving information presented visually. In this context, we present the digital Learning Environment on Mathematics for Autistic children (LEMA), which resulted from a PhD research project in Multimedia in Education and was developed by the Thematic Line Geometrix, located in the Department of Mathematics, in collaboration with the DigiMedia Research Center of the Department of Communication and Art (University of Aveiro, Portugal). LEMA is a digital mathematical learning environment whose activities are dynamically adapted to the user’s profile, towards the development of the mathematical abilities of children aged 6–12 years diagnosed with ASD. LEMA has already been evaluated with end-users (both students and expert teachers), and based on the analysis of the collected data, readjustments were made, enabling the continuous improvement of the prototype, namely through the integration of universal design for learning (UDL) approaches, which are of utmost importance in ASD due to its heterogeneity.
The learning strategies incorporated in LEMA are: (i) options for custom choice of math activities, according to the user’s profile; (ii) simple interfaces with few elements, presenting only the features and content needed for the ongoing task; (iii) simple visual and textual language; (iv) different types of feedback (auditory, visual, positive/negative reinforcement, hints with helpful instructions including math concept definitions, solved math activities using split and easier tasks, and videos/animations that show a solution to the proposed activity); (v) information in multiple representations, such as text, video, audio and image, for better content and vocabulary understanding, in order to stimulate, motivate and engage users in mathematical learning, while also helping users to focus on content; (vi) avoidance of elements that distract or interfere with focus and attention; (vii) clear instructions and orientation about tasks to ease the user’s understanding of the content and its language; and (viii) buttons, familiar icons and contrast between font and background. Since these children may have little sensory tolerance and impaired motor skills, besides interacting with LEMA through the mouse (point and click with a single button), the user can also interact with LEMA through a Kinect device (using simple gesture moves).

Keywords: autism spectrum disorder, digital technologies, inclusion, mathematical abilities, mathematical learning activities

Procedia PDF Downloads 116
41 Establishing Correlation between Urban Heat Island and Urban Greenery Distribution by Means of Remote Sensing and Statistics Data to Prioritize Revegetation in Yerevan

Authors: Linara Salikhova, Elmira Nizamova, Aleksandra Katasonova, Gleb Vitkov, Olga Sarapulova

Abstract:

While most European cities conduct research on heat-related risks, there is a research gap in the Caucasus region, particularly in Yerevan, Armenia. This study tests a method for establishing a correlation between urban heat islands (UHI) and the distribution of urban greenery in order to prioritize heat-vulnerable areas for revegetation. Armenia has not considered UHI mitigation measures in its urban development strategies, despite a 2.1°C increase in average annual temperature over the past 32 years. However, planting vegetation in the city is commonly used to deal with air pollution and can also be effective in reducing UHI if it prioritizes heat-vulnerable areas. The research focuses on establishing such priorities while considering the distribution of urban greenery across the city. The lack of spatially explicit air temperature data necessitated the use of satellite images to achieve the following objectives: (1) identification of land surface temperatures (LST) and quantification of temperature variations across districts; (2) classification of massifs of land surface types using the normalized difference vegetation index (NDVI); (3) correlation of land surface classes with LST. Identification of the heat-vulnerable city areas (in this study, areas with a high proportion of residents aged 75 years and above) is based on demographic data (Census 2011). NDVI was calculated from satellite images (Sentinel-2) captured on June 5, 2021, and the massifs of the land surface were divided into five surface classes. Due to capacity limitations, the average LST for each district was derived from a single Landsat-8 image acquired on August 15, 2021. Local relief is not considered, as the study focuses mainly on the interconnection between temperatures and green massifs. The average temperature in the city is 3.8°C higher than in the surrounding non-urban areas, with the temperature excess ranging from a low in Norq Marash to a high in Nubarashen. 
Norq Marash and Avan have the highest tree and grass coverage proportions, at 56.2% and 54.5%, respectively. In the other districts, the combined share of wastelands and buildings is at least three times that of grass and trees, ranging from 49.8% in Quanaqer-Zeytun to 76.6% in Nubarashen. The results show that lower tree and grass coverage within a district correlates with a larger temperature increase. The temperature excess is highest in the Erebuni, Ajapnyak, and Nubarashen districts, each of which has less than 25% of its area covered with grass and trees. Conversely, the Avan and Norq Marash districts show a lower temperature difference, as more than 50% of their areas are covered with trees and grass. According to the findings, a significant proportion of the elderly population (35%) aged 75 years and above resides in the Erebuni, Ajapnyak, and Shengavit districts, which are more susceptible to heat stress because their LST is higher than in other city districts. The findings suggest that comparing the distribution of green massifs with LST can contribute to the prioritization of heat-vulnerable city areas for revegetation, and the method can serve as a rationale for an urban greening program.
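The NDVI calculation, surface classification and greenery-versus-LST comparison described above can be sketched as below. The band choices follow the standard Sentinel-2 convention (B8 for near-infrared, B4 for red), but the class thresholds are illustrative assumptions, not the authors' exact boundaries.

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), e.g. from Sentinel-2 bands B8 (NIR) and B4 (red)."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / np.maximum(nir + red, 1e-9)  # guard against division by zero

def classify_surface(ndvi_arr, thresholds=(-0.1, 0.1, 0.25, 0.4)):
    """Split NDVI into five surface classes (threshold values are illustrative)."""
    return np.digitize(ndvi_arr, thresholds)  # labels 0..4, bare soil/built-up to dense green

def green_cover_vs_lst(green_share, mean_lst):
    """Pearson correlation between district green-cover share (%) and mean district LST (deg C)."""
    return np.corrcoef(green_share, mean_lst)[0, 1]
```

With district-level inputs such as those reported here (high green share in Norq Marash and Avan, low share and high LST in Nubarashen), the correlation coefficient comes out strongly negative, which is the relationship the prioritization builds on.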

Keywords: heat-vulnerability, land surface temperature, urban greenery, urban heat island, vegetation

Procedia PDF Downloads 72
40 Backward-Facing Step Measurements at Different Reynolds Numbers Using Acoustic Doppler Velocimetry

Authors: Maria Amelia V. C. Araujo, Billy J. Araujo, Brian Greenwood

Abstract:

The flow over a backward-facing step is characterized by flow separation, recirculation and reattachment over a simple geometry. This type of fluid behaviour occurs in many practical engineering applications, hence the reason for investigating it. Historically, fluid flows over a backward-facing step have been examined in many experiments using a variety of measuring techniques, such as laser Doppler velocimetry (LDV), hot-wire anemometry, particle image velocimetry or hot-film sensors. However, some of these techniques cannot conveniently be used in separated flows or are too complicated and expensive. In this work, the applicability of the acoustic Doppler velocimetry (ADV) technique to such flows is investigated at various Reynolds numbers corresponding to different flow regimes. Reports of this measuring technique in separated flows are scarce in the literature; moreover, most evaluations of the Reynolds number effect in separated flows rely on numerical modelling. The ADV technique has the advantage of providing nearly non-invasive measurements, which is important in resolving turbulence. The ADV Nortek Vectrino+ was used to characterize the flow in a recirculating laboratory flume at various Reynolds numbers (Reh = 3738, 5452, 7908 and 17388) based on the step height (h), in order to capture different flow regimes, and the results were compared to those obtained using other measuring techniques. To enable comparison with other researchers, the step height, expansion ratio and the positions upstream and downstream of the step were reproduced. The post-processing of the ADV records was performed using a customized numerical code, which implements several filtering techniques. Subsequently, the Vectrino noise level was evaluated by computing the power spectral density of the stream-wise horizontal velocity component. 
The normalized mean stream-wise velocity profiles, skin-friction coefficients and reattachment lengths were obtained for each Reh. Turbulent kinetic energy, Reynolds shear stresses and normal Reynolds stresses were determined for Reh = 7908. An uncertainty analysis of the measured variables was carried out using the moving block bootstrap technique. Low noise levels were obtained after implementing the post-processing techniques, showing their effectiveness, and the errors obtained in the uncertainty analysis were generally low. For Reh = 7908, the normalized mean stream-wise velocity and turbulence profiles were compared directly with those acquired by other researchers using the LDV technique, and good agreement was found. The ADV technique proved able to characterize the flow over a backward-facing step properly, although additional caution should be taken for measurements very close to the bottom. The ADV measurements gave reliable results regarding: a) the stream-wise velocity profiles; b) the turbulent shear stress; c) the reattachment length; and d) the identification of the transition from transitional to turbulent flow. Despite being a relatively inexpensive technique, acoustic Doppler velocimetry can be used with confidence in separated flows and is thus very useful for numerical model validation. However, it is very important to perform adequate post-processing of the acquired data to obtain low noise levels, thus decreasing the uncertainty.
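Two of the post-processing steps described here, evaluating the Vectrino noise level via the power spectral density and estimating uncertainty with the moving block bootstrap, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions; the authors' customized code also implements despiking and filtering steps not shown here.

```python
import numpy as np

def psd(u, fs):
    """One-sided Hann-windowed periodogram of a velocity record u sampled at fs (Hz).
    A flat tail at high frequencies indicates the Doppler noise floor."""
    u = np.asarray(u, dtype=float) - np.mean(u)   # remove the mean velocity
    w = np.hanning(len(u))
    spec = np.abs(np.fft.rfft(u * w)) ** 2 / (fs * np.sum(w ** 2))
    spec[1:-1] *= 2.0  # fold negative frequencies into the one-sided estimate
    return np.fft.rfftfreq(len(u), d=1.0 / fs), spec

def moving_block_bootstrap(x, block_len, n_boot=1000, seed=0):
    """Standard error of the mean via the moving block bootstrap: resampling
    overlapping blocks preserves short-range correlation in the record."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n, n_blocks = len(x), int(np.ceil(len(x) / block_len))
    means = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, n - block_len + 1, size=n_blocks)
        resample = np.concatenate([x[s:s + block_len] for s in starts])[:n]
        means[b] = resample.mean()
    return means.std(ddof=1)
```

The block length is a tuning choice: it should exceed the integral time scale of the turbulence so that resampled blocks remain approximately independent of one another.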

Keywords: ADV, experimental data, multiple Reynolds number, post-processing

Procedia PDF Downloads 148