Search results for: Complex fuzzy evolution equations
102 Geographic Information System Based Multi-Criteria Subsea Pipeline Route Optimisation
Authors: James Brown, Stella Kortekaas, Ian Finnie, George Zhang, Christine Devine, Neil Healy
Abstract:
The use of GIS as an analysis tool for engineering decision making is now best practice in the offshore industry. GIS enables multidisciplinary data integration, analysis and visualisation, which allows the presentation of large and intricate datasets in a simple map interface accessible to all project stakeholders. Presenting integrated geoscience and geotechnical data in GIS enables decision makers to be well-informed. This paper presents a successful case study of how GIS spatial analysis techniques were applied to help select the most favourable pipeline route. Routing a pipeline through any natural environment faces numerous obstacles, whether topographical, geological, engineering or financial. Where the pipeline is subjected to external hydrostatic water pressure and is carrying pressurised hydrocarbons, the requirement to safely route the pipeline through hazardous terrain becomes paramount. This study illustrates how the application of modern, GIS-based pipeline routing techniques enabled the identification of a single most-favourable pipeline route crossing a challenging seabed terrain. Conventional approaches to pipeline route determination focus on manual avoidance of primary constraints whilst endeavouring to minimise route length. Such an approach is qualitative, subjective and liable to bias towards the disciplines and expertise involved in the routing process. For very short routes traversing benign seabed topography in shallow water this approach may be sufficient, but for deepwater geohazardous sites an automated, multi-criteria, and quantitative approach is essential. This study combined multiple routing constraints using modern least-cost-routing algorithms deployed in GIS, hitherto unachievable with conventional approaches. The least-cost-routing procedure begins with the assignment of geocost across the study area.
Geocost is defined as a numerical penalty score representing the hazard posed by each routing constraint (e.g. slope angle, rugosity, vulnerability to debris flows) to the pipeline. All geocosted routing constraints are combined to generate a composite geocost map that is used to compute the least-geocost route between two defined terminals. The analyses were applied to select the most favourable pipeline route for a potential gas development in deep water. The study area is geologically complex, with a series of incised, potentially active canyons carved into a steep escarpment and evidence of extensive debris flows. A similar debris flow in the future could cause significant damage to a poorly-placed pipeline. Protruding inter-canyon spurs offer lower-gradient options for ascending an escarpment, but the vulnerability of these spurs to periodic failure is not well understood. Close collaboration between geoscientists, pipeline engineers, geotechnical engineers and, of course, the gas export pipeline operator guided the analyses and the assignment of geocosts. Shorter route length, less severe slope angles, and geohazard avoidance were the primary drivers in identifying the most favourable route.
Keywords: geocost, geohazard, pipeline route determination, pipeline route optimisation, spatial analysis
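The least-cost-routing step described above can be sketched with a standard shortest-path search over a geocost raster. The following Python sketch is not the GIS tooling used in the study: the raster values, terminal cells, and the choice of Dijkstra's algorithm on an 8-connected grid (with unweighted diagonal steps, which real least-cost-path tools typically weight by distance) are illustrative assumptions only.

```python
import heapq

def least_geocost_route(geocost, start, end):
    """Minimum-cumulative-geocost path between two raster cells using
    Dijkstra's algorithm on an 8-connected grid. Each step adds the
    geocost of the cell entered (diagonals are not distance-weighted
    here, a simplification real GIS tools avoid)."""
    rows, cols = len(geocost), len(geocost[0])
    dist = {start: geocost[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == end:
            break
        if d > dist.get(cell, float("inf")):
            continue  # stale queue entry
        r, c = cell
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr, dc) != (0, 0) and 0 <= nr < rows and 0 <= nc < cols:
                    nd = d + geocost[nr][nc]
                    if nd < dist.get((nr, nc), float("inf")):
                        dist[(nr, nc)] = nd
                        prev[(nr, nc)] = cell
                        heapq.heappush(pq, (nd, (nr, nc)))
    # Walk the predecessor chain back from the end terminal.
    path, cell = [end], end
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return path[::-1], dist[end]

# Invented composite geocost raster: 9s mark hazardous cells (e.g. a canyon).
raster = [
    [1, 1, 9, 1],
    [1, 9, 9, 1],
    [1, 1, 1, 1],
]
route, cost = least_geocost_route(raster, (0, 0), (0, 3))
```

The route detours through the low-geocost cells rather than taking the geometrically shortest line, which is exactly the trade-off the composite geocost map encodes.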
Procedia PDF Downloads 406
101 Antibacterial Nanofibrous Film Encapsulated with 4-terpineol/β-cyclodextrin Inclusion Complexes: Relative Humidity-Triggered Release and Shrimp Preservation Application
Authors: Chuanxiang Cheng, Tiantian Min, Jin Yue
Abstract:
Antimicrobial active packaging enables extensive biological effects to improve food safety. However, the efficacy of antimicrobial packaging hinges on factors including the diffusion rate of the active agent toward the food surface, the initial content of the antimicrobial agent, and the targeted food shelf life. Among the possibilities of antimicrobial packaging design, an interesting approach involves the incorporation of volatile antimicrobial agents into the packaging material. In this case, the necessity for direct contact between the active packaging material and the food surface is mitigated, as the antimicrobial agent exerts its action through the packaging headspace atmosphere towards the food surface. However, it remains difficult to achieve controlled and precise release of bioactive compounds to a specific target location in the required quantity in food packaging applications. Remarkably, the development of stimuli-responsive materials for electrospinning has introduced the possibility of achieving controlled release of active agents under specific conditions, thereby yielding enduring biological effects. Relative humidity (RH) for the storage of food categories such as meat and aquatic products typically exceeds 90%. Consequently, high RH can be used as an abiotic trigger for the release of active agents to prevent microbial growth. Hence, a novel RH-responsive polyvinyl alcohol/chitosan (PVA/CS) composite nanofibrous film incorporated with 4-terpineol/β-cyclodextrin inclusion complexes (4-TA@β-CD ICs) was engineered by electrospinning that can be deployed as a functional packaging material. The characterization results showed that the thermal stability of the films was enhanced after the incorporation due to hydrogen bonds between the ICs and the polymers.
Remarkably, the 4 wt% 4-TA@β-CD ICs/PVA/CS film exhibited enhanced crystallinity, moderate hydrophilicity (water contact angle of 81.53°), good light barrier properties (transparency of 1.96%) and water resistance (water vapor permeability of 3.17 g·mm/(m²·h·kPa)). Moreover, this film also showed optimized mechanical performance, with a Young’s modulus of 11.33 MPa, a tensile strength of 19.99 MPa and an elongation at break of 4.44%. Notably, the antioxidant and antibacterial properties of this packaging material were significantly improved. The film demonstrated half-maximal inhibitory concentration (IC50) values of 87.74% and 85.11% for scavenging 2,2-diphenyl-1-picrylhydrazyl (DPPH) and 2,2′-azinobis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) free radicals, respectively, in addition to an inhibition efficiency of 65% against Shewanella putrefaciens, the characteristic bacterium in aquatic products. Most importantly, the film achieved controlled release of 4-TA under high (98%) RH through water-induced plasticization of the polymers, swelling of the polymer chains, and disruption of hydrogen bonds within the cyclodextrin inclusion complex. Consequently, low relative humidity is suitable for the storage of the nanofibrous film, while the high-humidity conditions typical of fresh food packaging environments effectively stimulated the release of active compounds from the film. This film, with its long-term antimicrobial effect, successfully extended the shelf life of Litopenaeus vannamei shrimp to 7 days at 4 °C. This attractive design could pave the way for the development of new food packaging materials.
Keywords: controlled release, electrospinning, nanofibrous film, relative humidity–responsive, shrimp preservation
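The humidity-triggered release behaviour can be illustrated with a simple first-order release model in which the rate constant grows with RH. This is a sketch under assumed parameter values (`k_ref`, `rh_ref`, and the linear RH scaling are all inventions for illustration), not the fitted kinetics of the actual film:

```python
import math

def fractional_release(t_hours, rh_percent, k_ref=0.05, rh_ref=98.0):
    """Fraction of 4-TA released after t_hours, assuming first-order
    kinetics with a rate constant proportional to relative humidity.
    k_ref is the assumed rate constant (1/h) at the reference RH."""
    k = k_ref * rh_percent / rh_ref
    return 1.0 - math.exp(-k * t_hours)

# At 98% RH (typical of fresh-food packaging) release proceeds far faster
# than at the low RH recommended for storing the film itself.
release_high = fractional_release(24, 98.0)
release_low = fractional_release(24, 30.0)
```

The qualitative behaviour matches the abstract's design intent: negligible release in dry storage, sustained release in the humid package headspace.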
Procedia PDF Downloads 71
100 Miniaturizing the Volumetric Titration of Free Nitric Acid in U(VI) Solutions: On the Lookout for a More Sustainable Radioanalytical Chemistry Process through Titration-On-A-Chip
Authors: Jose Neri, Fabrice Canto, Alastair Magnaldo, Laurent Guillerme, Vincent Dugas
Abstract:
A miniaturized and automated approach for the volumetric titration of free nitric acid in U(VI) solutions is presented. Free acidity measurement refers to the quantification of acidity in solutions containing hydrolysable heavy metal ions such as U(VI), U(IV) or Pu(IV), without taking into account the acidity contribution from the hydrolysis of such metal ions. It is, in fact, an operation essential to the control of the nuclear fuel recycling process. The main objectives behind the technical optimization of the current ‘beaker’ method were to reduce the amount of radioactive substance handled by the laboratory personnel, to ease instrumentation adjustment within a glove-box environment and to allow high-throughput analysis for more cost-effective operations. The measurement technique is based on the concept of Taylor-Aris dispersion in order to create, inside a 200 μm × 5 cm circular cylindrical micro-channel, a linear concentration gradient in less than a second. The proposed analytical methodology relies on actinide complexation using a pH 5.6 sodium oxalate solution and subsequent alkalimetric titration of nitric acid with sodium hydroxide. The titration process is followed with a CCD camera for fluorescence detection; the neutralization boundary can be visualized in a detection range of 500-600 nm thanks to the addition of a pH-sensitive fluorophore. The operating principle of the developed device allows the active generation of linear concentration gradients using a single cylindrical micro-channel. This feature simplifies the fabrication and ease of use of the micro-device, as it does not need a complex micro-channel network or passive mixers to generate the chemical gradient.
Moreover, since the linear gradient is determined by the input pressure of the liquid reagents, its generation can be fully achieved in under a second, making it a more time-efficient gradient generation process than other source-sink passive diffusion devices. The resulting linear gradient generator device was therefore adapted to perform, for the first time, a volumetric titration on a chip, where the amount of reagents used is fixed by the total volume of the micro-channel, avoiding the substantial waste generation of other flow-based titration techniques. The associated analytical method is automated, and its linearity has been proven for the free acidity determination of U(VI) samples containing up to 0.5 M of actinide ion and nitric acid in a concentration range of 0.5 M to 3 M. In addition to automation, the developed analytical methodology and technique greatly improve on the standard off-line oxalate complexation and alkalimetric titration method by reducing the required sample volume a thousand-fold, the nuclear waste per analysis forty-fold, and the analysis time eight-fold. The developed device represents, therefore, a great step towards an easy-to-handle nuclear-related application, which in the short term could be used to improve laboratory safety as much as to reduce the environmental impact of the radioanalytical chain.
Keywords: free acidity, lab-on-a-chip, linear concentration gradient, Taylor-Aris dispersion, volumetric titration
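The principle of reading a concentration from the neutralization boundary along a linear gradient can be sketched as follows: the boundary sits where moles of base exactly neutralize moles of acid, so the local titrant fraction f at that position yields the free acidity. The function name and the numbers are illustrative, not taken from the paper:

```python
def free_acidity_from_boundary(f_base, c_base_M):
    """Free nitric acid concentration (M) inferred from the position of
    the neutralization boundary along a linear base/acid mixing gradient.
    f_base: local volume fraction of titrant at the boundary (0 < f < 1).
    At the boundary, c_acid * (1 - f_base) == c_base_M * f_base."""
    if not 0.0 < f_base < 1.0:
        raise ValueError("boundary fraction must lie strictly inside the gradient")
    return c_base_M * f_base / (1.0 - f_base)
```

For example, a boundary at the midpoint of the gradient (f = 0.5) means the acid and the titrant have equal concentrations, while a boundary at f = 0.75 means the acid is three times more concentrated than the titrant.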
Procedia PDF Downloads 388
99 i2kit: A Tool for Immutable Infrastructure Deployments
Authors: Pablo Chico De Guzman, Cesar Sanchez
Abstract:
Microservice architectures are increasingly common in distributed cloud applications due to their advantages in software composition, development speed, release cycle frequency and the time to market of business logic. On the other hand, these architectures also introduce some challenges in the testing and release phases of applications. Container technology solves some of these issues by providing reproducible environments, ease of software distribution and isolation of processes. However, other issues remain unsolved in current container technology when dealing with multiple machines, such as networking for multi-host communication, service discovery, load balancing or data persistency (even though some of these challenges are already solved by traditional cloud vendors in a very mature and widespread manner). Container cluster management tools, such as Kubernetes, Mesos or Docker Swarm, attempt to solve these problems by introducing a new control layer where the unit of deployment is the container (or the pod, a set of strongly related containers that must be deployed on the same machine). These tools are complex to configure and manage, and they do not follow a pure immutable infrastructure approach, since servers are reused between deployments. Indeed, these tools introduce dependencies at execution time for solving networking or service discovery problems. If an error occurs in the control layer, affecting running applications, specific expertise is required to perform ad-hoc troubleshooting. As a consequence, it is not surprising that container cluster support is becoming a source of revenue for consulting services. This paper presents i2kit, a deployment tool based on the immutable infrastructure pattern, where the virtual machine is the unit of deployment. The input for i2kit is a declarative definition of a set of microservices, where each microservice is defined as a pod of containers.
Microservices are built into machine images using linuxkit, a tool for creating minimal Linux distributions specialized in running containers. These machine images are then deployed to one or more virtual machines, which are exposed through a cloud vendor load balancer. Finally, the load balancer endpoint is set in other microservices using an environment variable, providing service discovery. The toolkit i2kit reuses the best ideas from container technology to solve problems like reproducible environments, process isolation, and software distribution, and at the same time relies on mature, proven cloud vendor technology for networking, load balancing and persistency. The result is a more robust system with no learning curve for troubleshooting running applications. We have implemented an open source prototype that transforms i2kit definitions into AWS CloudFormation templates, where each microservice AMI (Amazon Machine Image) is created on the fly using linuxkit. Even though container cluster management tools have more flexibility for resource allocation optimization, we argue that adding a new control layer carries more significant disadvantages. Resource allocation is greatly improved by using linuxkit, which introduces a very small footprint (around 35 MB). Also, the system is more secure, since linuxkit installs only the minimum set of dependencies needed to run containers. The toolkit i2kit is currently under development at the IMDEA Software Institute.
Keywords: container, deployment, immutable infrastructure, microservice
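The pipeline described above (declarative microservices, machine images, VMs behind a load balancer) can be sketched as a template generator. The input schema, resource names, and AMI naming below are invented for illustration and do not reproduce the actual i2kit definition format or its generated CloudFormation output:

```python
def to_cloudformation(services):
    """Turn a declarative, i2kit-style service map into a minimal
    CloudFormation-like template dict (hypothetical schema). Each
    service gets one VM resource plus a load balancer in front of it."""
    resources = {}
    for name, spec in services.items():
        ami = f"ami-{name}-linuxkit"  # stand-in for an AMI built on the fly
        resources[f"{name}Instance"] = {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": ami,
                "InstanceType": spec.get("size", "t2.micro"),
            },
        }
        resources[f"{name}LB"] = {
            "Type": "AWS::ElasticLoadBalancing::LoadBalancer",
            "Properties": {"Instances": [{"Ref": f"{name}Instance"}]},
        }
    return {"AWSTemplateFormatVersion": "2010-09-09", "Resources": resources}

template = to_cloudformation({"api": {"size": "t2.small"}, "web": {}})
```

The point of the sketch is the shape of the transformation: one immutable VM image per microservice, fronted by vendor-managed load balancing instead of a cluster control layer.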
Procedia PDF Downloads 180
98 Simulation and Analysis of MEMS-Based Flexible Capacitive Pressure Sensors with COMSOL
Authors: Ding Liangxiao
Abstract:
The technological advancements in Micro-Electro-Mechanical Systems (MEMS) have significantly contributed to the development of new, flexible capacitive pressure sensors, which are pivotal in transforming wearable and medical device technologies. This study employs the sophisticated simulation tools available in COMSOL Multiphysics® to develop and analyze a MEMS-based sensor with a tri-layered design. This sensor comprises top and bottom electrodes made from gold (Au), noted for its excellent conductivity, a middle dielectric layer made from a composite of silver nanowires (AgNWs) embedded in thermoplastic polyurethane (TPU), and a flexible, durable substrate of polydimethylsiloxane (PDMS). This research was directed towards understanding how changes in the physical characteristics of the AgNWs/TPU dielectric layer, specifically its thickness and surface area, impact the sensor's operational efficacy. We assessed several key electrical properties: capacitance, electric potential, and membrane displacement under varied pressure conditions. These investigations are crucial for enhancing the sensor's sensitivity and ensuring its adaptability across diverse applications, including health monitoring systems and dynamic user interface technologies. To ensure the reliability of our simulations, we applied the Effective Medium Theory to calculate the dielectric constant of the AgNWs/TPU composite accurately. This approach is essential for predicting how the composite material will perform under different environmental and operational stresses, thus facilitating the optimization of the sensor design for enhanced performance and longevity. Moreover, we explored the potential benefits of innovative three-dimensional structures for the dielectric layer compared to traditional flat designs.
Our hypothesis was that 3D configurations might improve the stress distribution and optimize the electrical field interactions within the sensor, thereby boosting its sensitivity and accuracy. Our simulation protocol includes comprehensive performance testing under simulated environmental conditions, such as temperature fluctuations and mechanical pressures, which mirror actual operational conditions. These tests are crucial for assessing the sensor's robustness and its ability to function reliably over extended periods, ensuring high reliability and accuracy in complex real-world environments. In our current research, although a full dynamic simulation analysis of the three-dimensional structures has not yet been conducted, preliminary explorations through three-dimensional modeling have indicated the potential for mechanical and electrical performance improvements over traditional planar designs. These initial observations emphasize the potential advantages and importance of incorporating advanced three-dimensional modeling techniques in the development of Micro-Electro-Mechanical Systems (MEMS) sensors, offering new directions for the design and functional optimization of future sensors. Overall, this study not only highlights the powerful capabilities of COMSOL Multiphysics® for modeling sophisticated electronic devices but also underscores the potential of innovative MEMS technology in advancing the development of more effective, reliable, and adaptable sensor solutions for a broad spectrum of technological applications.
Keywords: MEMS, flexible sensors, COMSOL Multiphysics, AgNWs/TPU, PDMS, 3D modeling, sensor durability
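The role of the effective medium calculation can be illustrated numerically. The sketch below uses the Maxwell-Garnett mixing rule (one common form of Effective Medium Theory; the abstract does not specify which variant the authors used) together with the ideal parallel-plate capacitance formula. All permittivities and geometry values are examples, and treating the conductive AgNW network as a finite high-permittivity inclusion is a deliberate simplification:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def maxwell_garnett(eps_matrix, eps_incl, f):
    """Maxwell-Garnett effective relative permittivity for spherical
    inclusions of volume fraction f dispersed in a host matrix."""
    num = eps_incl + 2 * eps_matrix + 2 * f * (eps_incl - eps_matrix)
    den = eps_incl + 2 * eps_matrix - f * (eps_incl - eps_matrix)
    return eps_matrix * num / den

def plate_capacitance(eps_r, area_m2, gap_m):
    """Ideal parallel-plate capacitance between the two Au electrodes."""
    return EPS0 * eps_r * area_m2 / gap_m

# Example: TPU host (assumed eps_r ~ 6) with a high-permittivity filler
# standing in for the AgNW network at 5% volume fraction.
eps_eff = maxwell_garnett(6.0, 1000.0, 0.05)
c0 = plate_capacitance(eps_eff, 1e-6, 50e-6)  # 1 mm^2 plate, 50 um gap
c1 = plate_capacitance(eps_eff, 1e-6, 40e-6)  # gap reduced under pressure
```

The capacitance rise from `c0` to `c1` as the gap closes is the basic sensing mechanism the simulations probe; the effective permittivity sets its scale.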
Procedia PDF Downloads 47
97 Expression Profiling of Chlorophyll Biosynthesis Pathways in Chlorophyll b-Lacking Mutants of Rice (Oryza sativa L.)
Authors: Khiem M. Nguyen, Ming C. Yang
Abstract:
Chloroplast pigments are extremely important during photosynthesis since they play essential roles in light absorption and energy transfer. Therefore, understanding the efficiency of chlorophyll (Chl) biosynthesis could facilitate enhancement of photo-assimilate accumulation and, ultimately, of crop yield. Chl-deficient mutants have been used extensively to study the Chl biosynthetic pathways and the biogenesis of the photosynthetic apparatus. Rice (Oryza sativa L.) is one of the leading food crops, serving as a staple food in many parts of the world. To the authors' best knowledge, Chl b-lacking rice has been found; however, the molecular mechanism of its Chl biosynthesis still remains unclear compared to wild-type rice. In this study, the ultrastructure, photosynthetic properties, and transcriptome profiles of wild-type rice (Norin No. 8, N8) and its Chl b-lacking mutant (Chlorina 1, C1) were examined. The findings showed that the total Chl content and Chl b content in the C1 leaves were strongly reduced compared to N8 leaves, suggesting that the reduction in total Chl content contributes to leaf color variation at the physiological level. The plastid ultrastructure of C1 showed abnormal thylakoid membranes with loss of starch granules, a large number of vesicles, and numerous plastoglobuli. The C1 rice also exhibited thinner grana stacks, caused by a reduction in the number of thylakoid membranes per granum. Thus, the different Chl a/b ratio of C1 may reflect abnormal plastid development and function. Transcriptional analysis identified 23 differentially expressed genes (DEGs) and 671 transcription factors (TFs) involved in Chl metabolism, chloroplast development, cell division, and photosynthesis.
The transcriptome profile and DEGs revealed that the gene encoding PsbR (a PSII core protein) was down-regulated, suggesting that reduced levels of light-harvesting complex proteins are responsible for the lower photosynthetic capacity of C1. In addition, the expression levels of the cell division protein (FtsZ) genes were significantly reduced in C1, causing a chloroplast division defect. A total of 19 DEGs involved in the Chl biosynthesis pathway were identified based on KEGG pathway assignment. Among these DEGs, the GluTR gene was down-regulated, whereas the UROD, CPOX, and MgCH genes were up-regulated. Observation through qPCR suggested that the later stages of Chl biosynthesis were enhanced in C1, whereas the early stages were inhibited. Plastid structure analysis together with transcriptomic analysis suggested that the Chl a/b ratio was amplified both by the reduction in Chl accumulation, owing to abnormal chloroplast development, and by the enhanced conversion of Chl b to Chl a. Moreover, the results indicated the same Chl-cycle pattern in the wild-type and C1 rice, pointing to another Chl b degradation pathway. Furthermore, the results demonstrated that normal grana stacking, along with the absence of Chl b and greatly reduced levels of Chl a in C1, provides evidence supporting the conclusion that factors other than LHCII proteins are involved in grana stacking. The findings of this study provide insight into the molecular mechanisms that underlie different Chl a/b ratios in rice.
Keywords: Chl-deficient mutant, grana stacking, photosynthesis, RNA-Seq, transcriptomic analysis
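The DEG identification step can be sketched as a fold-change filter between the two genotypes. A real RNA-Seq pipeline would normalise counts and apply a statistical test across replicates (e.g. with DESeq2 or edgeR); the sketch below, with invented expression values, shows only the core fold-change logic:

```python
import math

def call_degs(wt_counts, mut_counts, fc_threshold=2.0):
    """Flag genes as differentially expressed when the mutant/wild-type
    fold change exceeds fc_threshold. Simplified: no normalisation,
    replicates, or significance test - only the fold-change filter."""
    degs = {}
    for gene, wt in wt_counts.items():
        mut = mut_counts[gene]
        log2fc = math.log2((mut + 1) / (wt + 1))  # +1 pseudocount
        if abs(log2fc) >= math.log2(fc_threshold):
            degs[gene] = round(log2fc, 2)
    return degs

# Invented counts: GluTR down-regulated, UROD up-regulated, ACT unchanged
# (gene names echo the abstract; the numbers are not the study's data).
degs = call_degs({"GluTR": 100, "UROD": 50, "ACT": 80},
                 {"GluTR": 20, "UROD": 210, "ACT": 85})
```

A negative log2 fold change marks down-regulation in the mutant, matching the direction reported for GluTR versus UROD/CPOX/MgCH.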
Procedia PDF Downloads 125
96 BIM Modeling of Site and Existing Buildings: Case Study of ESTP Paris Campus
Authors: Rita Sassine, Yassine Hassani, Mohamad Al Omari, Stéphanie Guibert
Abstract:
Building Information Modelling (BIM) is the process of creating, managing, and centralizing information during the building lifecycle. BIM can be used throughout a construction project, from the initiation phase to the planning and execution phases to the maintenance and lifecycle management phase. For existing buildings, BIM can be used for specific applications such as lifecycle management. However, most existing buildings do not have a BIM model. Creating a compatible BIM for existing buildings is very challenging: it requires special equipment for data capturing and effort to convert these data into a BIM model. The main difficulties in such projects are to define the data needed, the level of development (LOD), and the methodology to be adopted. In addition to managing information for an existing building, studying the impact of the built environment is a challenging topic. So, integrating the existing terrain that surrounds buildings into the digital model is essential to be able to run several simulations, such as flood simulation, energy simulation, etc. Replicating the physical model and updating its information in real time to create its Digital Twin (DT) is very important. The Digital Terrain Model (DTM) represents the ground surface of the terrain by a set of discrete points with unique height values over 2D points, based on a reference surface (e.g., mean sea level, geoid, or ellipsoid). In addition, information related to pavement materials, types of vegetation, heights, and damaged surfaces can be integrated. Our aim in this study is to define the methodology to be used in order to provide a 3D BIM model of the site and the existing buildings based on the case study of the “École Spéciale des Travaux Publics (ESTP Paris)” school of engineering campus. The property is located on a hilly site of 5 hectares and is composed of more than 20 buildings with a total area of 32,000 square meters and heights between 50 and 68 meters.
In this work, the campus's precise levelling grid is computed according to the NGF-IGN69 altimetric system, and the grid control points according to the RGF93 (Réseau Géodésique Français) – Lambert 93 French system, using different methods: (i) land topographic surveying with a robotic total station, (ii) a GNSS (Global Navigation Satellite System) levelling grid in NRTK (Network Real Time Kinematic) mode, and (iii) point clouds generated by laser scanning. These technologies allow the computation of multiple building parameters such as boundary limits, the number of floors, the georeferencing of the floors, the georeferencing of the four base corners of each building, etc. Once the entry data are identified, the digital model of each building is created. The DTM is also modeled. The process of altimetric determination is complex and requires effort to collect and analyze multiple data formats. Since many technologies can be used to produce digital models, different file formats such as DraWinG (DWG), LASer (LAS), Comma-Separated Values (CSV), Industry Foundation Classes (IFC) and ReViT (RVT) are generated. Checking the interoperability between BIM models is therefore very important. In this work, all models are linked together and shared on the 3DEXPERIENCE collaborative platform.
Keywords: building information modeling, digital terrain model, existing buildings, interoperability
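One simple way to turn surveyed ground points into a continuous DTM surface is inverse-distance-weighted (IDW) interpolation. The sketch below is illustrative only (coordinates and elevations are invented, and a production workflow would use the surveying or GIS software's own gridding tools rather than hand-rolled code):

```python
def idw_elevation(x, y, survey_points, power=2):
    """Inverse-distance-weighted elevation estimate at (x, y) from
    surveyed ground points given as (x, y, z) tuples - one simple way
    to interpolate a DTM between levelling-grid points."""
    num = den = 0.0
    for px, py, pz in survey_points:
        d2 = (x - px) ** 2 + (y - py) ** 2
        if d2 == 0.0:
            return pz  # exactly on a surveyed point
        w = 1.0 / d2 ** (power / 2)
        num += w * pz
        den += w
    return num / den

# Two invented levelling points 10 m apart on a slope.
pts = [(0.0, 0.0, 10.0), (10.0, 0.0, 20.0)]
```

Evaluated on a grid, this kind of interpolator produces the height field that the terrain simulations (flood, energy, etc.) consume.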
Procedia PDF Downloads 114
95 Poverty Reduction in European Cities: Local Governments’ Strategies and Programmes to Reduce Poverty; Interview Results from Austria
Authors: Melanie Schinnerl, Dorothea Greiling
Abstract:
In the context of the Europe 2020 strategy, the fight against poverty returned to the center of national political efforts. This served as motivation for an Austrian research grant-funded project to focus on the under-researched local government level, with the aim of identifying municipal best-practice cases and deriving policy implications for Austria. Designing effective poverty reduction strategies is a complex challenge which calls for an integrated multi-actor approach. Cities are increasingly confronted with the need to combat poverty, even in rich EU member states. In doing so, cities face substantial demographic, cultural, economic and social challenges as well as changing welfare state regimes. Furthermore, there is a low willingness of (right-wing) governments to support the poor. Against this background, the research questions are: 1. How do local governments define poverty? 2. Who are the main risk groups and what are the most pressing problems when fighting urban poverty? 3. What are regarded as successful anti-poverty initiatives? 4. What is the underlying welfare state concept? To address the research questions, a multi-method approach was chosen, consisting of a systematic literature analysis, a comprehensive document analysis, and expert interviews. For interpreting the data, the project follows the qualitative-interpretive paradigm. Municipal approaches to reducing poverty are compared based on deductively as well as inductively identified criteria. In addition to an intensive literature analysis, 40 interviews have been conducted in Austria since the project started in March 2018. From the other countries, 14 responses have been collected, providing a first insight. Regarding the definition of poverty, the EU-SILC definition, as well as counting the persons who receive need-based minimum social benefits (the Austrian form of social welfare), are the predominant approaches in Austria.
In addition to homeless people, single-parent families, unskilled persons, long-term unemployed persons, migrants (first and second generation), refugees and families with at least 3 children were frequently mentioned. The most pressing challenges for Austrian cities are: expected reductions of social budgets, great insecurity about the central government's social policy reform plans, the growing number of homeless people and a lack of affordable housing. Together with affordable housing, old-age poverty will gain more importance in the future. The Austrian best-practice examples suggested by interviewees focused primarily on homeless people, children and young people (up to 25). The central government's policy changes have already had negative effects on programs for refugees and elderly unemployed people. Social housing in Vienna was frequently mentioned as an international best-practice case that other growing cities can learn from. The results from Austria indicate a change towards the social investment state, which primarily focuses on children and labour market integration. The first insights from the other countries indicate that affordable housing and labour market integration are cross-cutting issues. Inherited poverty and old-age poverty seem to be more pressing outside Austria.
Keywords: anti-poverty policies, European cities, empirical study, social investment
Procedia PDF Downloads 118
94 Distributed Listening in Intensive Care: Nurses’ Collective Alarm Responses Unravelled through Auditory Spatiotemporal Trajectories
Authors: Michael Sonne Kristensen, Frank Loesche, James Foster, Elif Ozcan, Judy Edworthy
Abstract:
Auditory alarms play an integral role in intensive care nurses’ daily work. Most medical devices in the intensive care unit (ICU) are designed to produce alarm sounds in order to make nurses aware of immediate or prospective safety risks. The utilisation of sound as a carrier of crucial patient information is highly dependent on nurses’ presence, both physical and mental. For ICU nurses, especially those who work with stationary alarm devices at the patient bed space, it is a challenge to display ‘appropriate’ alarm responses at all times, as they have to navigate with great flexibility in a complex work environment. While being primarily responsible for a small number of allocated patients, they are often required to engage with other nurses’ patients, relatives, and colleagues at different locations inside and outside the unit. This work explores the social strategies used by a team of nurses to comprehend and react to the information conveyed by the alarms in the ICU. Two main research questions guide the study: To what extent do alarms from a patient bed space reach the relevant responsible nurse by direct auditory exposure? By which means do responsible nurses get informed about their patients’ alarms when not directly exposed to them? A comprehensive video-ethnographic field study was carried out to capture and evaluate alarm-related events in an ICU. The study involved close collaboration with four nurses who wore eye-level cameras and ear-level binaural audio recorders during several work shifts. At all times, the entire unit was monitored by multiple video and audio recorders. From a data set of hundreds of hours of recorded material, information about the nurses’ location, social interaction, and alarm exposure at any point in time was coded in a multi-channel replay interface.
The data show that responsible nurses’ direct exposure to and awareness of the alarms of their allocated patients vary significantly depending on workload, social relationships, and the location of the patient’s bed space. Distributed listening is deliberately employed by the nursing team as a social strategy to respond adequately to alarms, but the patterns of information flow prompted by alarm-related events are not uniform. Auditory Spatiotemporal Trajectory (AST) is proposed as a methodological label to designate the integration of temporal, spatial and auditory load information. As a mixed-method metric, it provides tangible evidence of how nurses’ individual alarm-related experiences differ from one another and from stationary points in the ICU. Furthermore, it is used to demonstrate how alarm-related information reaches the individual nurse through principles of social and distributed cognition, and how that information relates to the actual alarm event. Thereby it bridges a long-standing gap in the literature on medical alarm utilisation between, on the one hand, initiatives to measure objective data of the medical sound environment without consideration for any human experience, and, on the other hand, initiatives to study subjective experiences of the medical sound environment without detailed evidence of the objective characteristics of the environment.
Keywords: auditory spatiotemporal trajectory, medical alarms, social cognition, video-ethnography
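The kind of per-nurse record an AST integrates can be sketched as a simple time series of location and ear-level alarm loudness. The field names, the exposure threshold, and the even-sampling assumption below are inventions for illustration, not the study's actual coding scheme:

```python
from dataclasses import dataclass

@dataclass
class ASTSample:
    """One time slice of a nurse's Auditory Spatiotemporal Trajectory:
    where the nurse was and how loud the alarms were at ear level."""
    t: float          # seconds since start of the coded segment
    location: str     # coded unit zone, e.g. "bed_3", "corridor"
    alarm_db: float   # binaural alarm level at the nurse's ear

def direct_exposure_time(trajectory, threshold_db=45.0):
    """Total time (s) the nurse was directly exposed to alarms above an
    assumed audibility threshold, given evenly spaced samples."""
    if len(trajectory) < 2:
        return 0.0
    dt = trajectory[1].t - trajectory[0].t
    return sum(dt for s in trajectory if s.alarm_db >= threshold_db)

# Four invented samples, one per second of a coded segment.
traj = [ASTSample(0.0, "bed_3", 50.0), ASTSample(1.0, "corridor", 40.0),
        ASTSample(2.0, "bed_3", 60.0), ASTSample(3.0, "bed_1", 30.0)]
```

Comparing such exposure totals between nurses, or against a stationary reference point, is the kind of contrast the AST metric is meant to make tangible.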
Procedia PDF Downloads 191
93 GIS-Based Flash Flood Runoff Simulation Model of Upper Teesta River Basin Using ASTER DEM and Meteorological Data
Authors: Abhisek Chakrabarty, Subhraprakash Mandal
Abstract:
Flash flood is one of the catastrophic natural hazards in the mountainous region of India. The recent flood in the Mandakini River in Kedarnath (14-17th June, 2013) is a classic example of flash floods that devastated Uttarakhand by killing thousands of people. The disaster was an integrated effect of high-intensity rainfall, the sudden breach of Chorabari Lake, and very steep topography. Every year in the Himalayan region, flash floods occur due to intense rainfall over a short period of time, cloudbursts, glacial lake outbursts, and the collapse of artificial check dams that cause high flow of river water. In the Sikkim-Darjeeling Himalaya, one of the probable flash flood occurrence zones is the Teesta watershed. The Teesta River is a right tributary of the Brahmaputra, draining a mountain area of approximately 8,600 sq. km. It originates in the Pauhunri massif (7,127 m). The total length of the mountain section of the river amounts to 182 km. The Teesta is characterized by a complex hydrological regime. The river is fed not only by precipitation, but also by melting glaciers and snow as well as groundwater. The present study describes an attempt to model surface runoff in the upper Teesta basin, which is directly related to catastrophic flood events, by creating a system based on GIS technology. The main objective was to construct a direct unit hydrograph for excess rainfall by estimating the streamflow response at the outlet of a watershed. Specifically, the methodology was based on the creation of a spatial database in a GIS environment and on data editing. Moreover, rainfall time-series data were collected from the Indian Meteorological Department and processed in order to calculate flow time and runoff volume. Apart from the meteorological data, background data such as topography, drainage network, land cover, and geological data were also collected.
The watershed was clipped from the entire area, streamlines were generated for the Teesta watershed, and cross-sectional profiles were plotted across the river at various locations from ASTER DEM data using ERDAS IMAGINE 9.0 and ArcGIS 10.0 software. Different hydraulic models for detecting flash flood probability were analysed using HEC-RAS, Flow-2D, and HEC-HMS software, which were of great importance in achieving the final result. With an input rainfall intensity above 400 mm per day for three days, the flood runoff simulation model shows outbursts of lakes and check dams, individually or in combination with runoff, causing severe damage to the downstream settlements. Model output shows that a 313 sq. km area is most vulnerable to flash floods, including Melli, Jourthang, Chungthang, and Lachung, and a 655 sq. km area is moderately vulnerable, including Rangpo, Yathang, Dambung, Bardang, Singtam, Teesta Bazar, and Thangu Valley. The model was validated by inserting the rainfall data of a flood event that took place in August 1968; 78% of the actual flooded area was reflected in the output of the model. Lastly, preventive and curative measures were suggested to reduce the losses from a probable flash flood event. Keywords: flash flood, GIS, runoff, simulation model, Teesta river basin
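The core operation the abstract describes, constructing a direct runoff hydrograph from excess rainfall and a unit hydrograph, amounts to a discrete convolution. The sketch below is illustrative only, not the authors' HEC-HMS model; the rainfall pulses and unit-hydrograph ordinates are hypothetical numbers.

```python
# Illustrative sketch (not the authors' model): derive a direct runoff
# hydrograph by convolving excess-rainfall pulses with unit-hydrograph
# ordinates, the core operation described in the abstract.
import numpy as np

def direct_runoff_hydrograph(excess_rain_mm, uh_ordinates):
    """Convolve excess-rainfall pulses (mm) with unit-hydrograph
    ordinates (m^3/s per mm) to get the outlet hydrograph (m^3/s)."""
    return np.convolve(excess_rain_mm, uh_ordinates)

# Hypothetical hourly excess rainfall (mm) and a triangular unit hydrograph
excess = np.array([10.0, 25.0, 15.0])
uh = np.array([0.2, 0.5, 0.3, 0.1])   # m^3/s per mm of excess rain
q = direct_runoff_hydrograph(excess, uh)
# Total runoff volume is conserved: q.sum() == excess.sum() * uh.sum()
```

The convolution simply superimposes a scaled, time-shifted copy of the unit hydrograph for each rainfall pulse, which is the linear-systems assumption behind unit-hydrograph theory.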
Procedia PDF Downloads 318
92 Analytical Model of Locomotion of a Thin-Film Piezoelectric 2D Soft Robot Including Gravity Effects
Authors: Zhiwu Zheng, Prakhar Kumar, Sigurd Wagner, Naveen Verma, James C. Sturm
Abstract:
Soft robots have drawn great interest recently due to a rich range of possible shapes and motions they can take on to address new applications, compared to traditional rigid robots. Large-area electronics (LAE) provides a unique platform for creating soft robots by leveraging thin-film technology to enable the integration of a large number of actuators, sensors, and control circuits on flexible sheets. However, the rich shapes and motions possible, especially when interacting with complex environments, pose significant challenges to forming well-generalized and robust models necessary for robot design and control. In this work, we describe an analytical model for predicting the shape and locomotion of a flexible (steel-foil-based) piezoelectric-actuated 2D robot based on Euler-Bernoulli beam theory. Nominally (unpowered), the robot lies flat on the ground; when powered, its shape is controlled by an array of piezoelectric thin-film actuators. Key features of the model are its ability to incorporate the significant effects of gravity on the shape and to precisely predict the spatial distribution of friction against the contacting surfaces, necessary for determining inchworm-type motion. We verified the model by developing a distributed discrete-element representation of a continuous piezoelectric actuator and by comparing its analytical predictions to discrete-element robot simulations using PyBullet. Without gravity, predicting the shape of a sheet with a linear array of piezoelectric actuators at arbitrary voltages is straightforward. However, gravity significantly distorts the shape of the sheet, causing some segments to flatten against the ground. Our work includes the following contributions: (i) A self-consistent approach was developed to exactly determine which parts of the soft robot are lifted off the ground, and the exact shape of these sections, for an arbitrary array of piezoelectric voltages and configurations.
(ii) Inchworm-type motion relies on controlling the relative friction with the ground surface in different sections of the robot. By adding torque balance to our model and analyzing shear forces, the model can then determine the exact spatial distribution of the vertical force that the ground exerts on the soft robot. Through this, the spatial distribution of friction forces between ground and robot can be determined. (iii) By combining this spatial friction distribution with the shape of the soft robot, as a function of time as the piezoelectric actuator voltages are changed, the inchworm-type locomotion of the robot can be determined. As a practical example, we calculated the performance of a 5-actuator system on a 50-µm thick steel foil. Piezoelectric properties of commercially available thin-film piezoelectric actuators were assumed. The model predicted inchworm motion of up to 200 µm per step. For independent verification, we also modelled the system using PyBullet, a discrete-element robot simulator. To model a continuous thin-film piezoelectric actuator, we broke each actuator into multiple segments, each of which consisted of two rigid arms with appropriate mass connected with a 'motor' whose torque was set by the applied actuator voltage. Excellent agreement between our analytical model and the discrete-element simulator was shown for both the full deformation shape and the motion of the robot. Keywords: analytical modeling, piezoelectric actuators, soft robot locomotion, thin-film technology
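The discrete-element representation described above, in which a continuous actuator is broken into rigid links whose joint angles follow the applied voltage, can be sketched as a simple kinematic chain. This is a minimal sketch under assumed values, not the authors' PyBullet model or their gravity-aware analytical solution: the segment length and curvature-per-volt constant are hypothetical, and gravity and ground contact are ignored.

```python
# Minimal sketch (assumed numbers) of a voltage-to-shape kinematic chain:
# each actuator is split into rigid segments; each joint bends by an angle
# proportional to the applied voltage. Gravity/contact effects are omitted.
import numpy as np

SEG_LEN = 0.01          # segment length in metres (assumed)
KAPPA_PER_VOLT = 0.5    # curvature per volt, 1/(m*V) (assumed constant)

def sheet_shape(voltages, segments_per_actuator=4):
    """Return (x, z) coordinates of link endpoints for a chain whose
    joint angles are set by per-actuator voltages."""
    x, z, theta = [0.0], [0.0], 0.0
    for v in voltages:
        dtheta = KAPPA_PER_VOLT * v * SEG_LEN   # bend per segment (rad)
        for _ in range(segments_per_actuator):
            theta += dtheta
            x.append(x[-1] + SEG_LEN * np.cos(theta))
            z.append(z[-1] + SEG_LEN * np.sin(theta))
    return np.array(x), np.array(z)

# A 5-actuator pattern, mirroring the 5-actuator example in the abstract
x, z = sheet_shape([10.0, 0.0, -10.0, 0.0, 10.0])
```

Alternating voltage signs produce the curved-up and flat regions whose friction asymmetry the paper exploits for inchworm stepping.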
Procedia PDF Downloads 181
91 Recent Findings of Late Bronze Age Mining and Archaeometallurgy Activities in the Mountain Region of Colchis (Southern Lechkhumi, Georgia)
Authors: Rusudan Chagelishvili, Nino Sulava, Tamar Beridze, Nana Rezesidze, Nikoloz Tatuashvili
Abstract:
The South Caucasus is one of the most important centers of prehistoric metallurgy, known for its Colchian bronze culture. Modern Lechkhumi, historical Mountainous Colchis, where the existence of prehistoric metallurgy is confirmed by the discovery of many artifacts, is part of this area. Studies focused on prehistoric smelting sites, related artefacts, and ore deposits have been conducted during the last ten years in Lechkhumi. More than 20 prehistoric smelting sites and artefacts associated with metallurgical activities (ore roasting furnaces, slags, crucible, and tuyère fragments) have been identified so far. Within the framework of integrated studies, it was established that these sites were operating in the 13th-9th centuries B.C. and were used for copper smelting. Palynological studies of slags revealed that chestnut (Castanea sativa) and hornbeam (Carpinus sp.) wood were used as smelting fuel. Geological exploration-analytical studies revealed that copper ore mining, processing, and smelting sites were distributed close to each other. Despite recent complex data, no signs of prehistoric mines (trenches) have been found in this part of the study area so far. Since 2018, the archaeological-geological exploration has focused on the southern part of Lechkhumi and covered the areas of the villages Okureshi and Opitara. Several copper smelting sites (Okureshi 1 and 2, Opitara 1), as well as a Colchian Bronze culture settlement, have been identified here. Three mine workings have been found in the narrow gorge of the river Rtkhmelebisgele in the vicinity of the village Opitara. In order to establish a link between the Opitara-Okureshi archaeometallurgical sites, Late Bronze Age settlements, and mines, various analytical methods, including petrography of mineralized rock and slags and atomic absorption spectrophotometry (AAS), have been applied.
The careful examination of the Opitara mine workings revealed that there is a striking difference between mine #1 on the right bank of the river and mines #2 and #3 on the left bank. The first one has all the characteristic features of a Soviet-period mine working (e.g., a high portal with angular ribs and a roof showing signs of blasting). In contrast, mines #2 and #3, which are located very close to each other, have round-shaped portals/entrances, low roofs, and fairly smooth ribs, and are filled with thick layers of river sediments and collapsed weathered rock mass. A thorough review of the publications related to prehistoric mine workings revealed some striking similarities between mines #2 and #3 and their worldwide analogues. Apparently, the ore extraction from these mines was conducted by fire-setting, applying primitive tools. It was also established that the mines are cut in Jurassic mineralized volcanic rocks. Ore minerals (chalcopyrite, pyrite, galena) are related to calcite and quartz veins. The results obtained through the petrochemical and petrography studies of mineralized rock samples from the Opitara mines and prehistoric slags are in complete correlation with each other, establishing a direct link between copper mining and smelting within the study area. Acknowledgment: This work was supported by the Shota Rustaveli National Science Foundation of Georgia (grant # FR-19-13022). Keywords: archaeometallurgy, Mountainous Colchis, mining, ore minerals
Procedia PDF Downloads 181
90 Ethnic Andean Concepts of Health and Illness in the Post-Colombian World and Its Relevance Today
Authors: Elizabeth J. Currie, Fernando Ortega Perez
Abstract:
‘MEDICINE’ is a new project funded under the EC Horizon 2020 Marie Skłodowska-Curie Actions to determine concepts of health and healing from a culturally specific indigenous context, using a framework of interdisciplinary methods which integrates archaeological-historical, ethnographic, and modern health sciences approaches. The study will generate new theoretical and methodological approaches to model how peoples survive and adapt their traditional belief systems in a context of alien cultural impacts. In the immediate wake of the conquest of Peru by invading Spanish armies and ideology, native Andeans responded by forming the Taki Onkoy millenarian movement, which rejected European philosophical and ontological teachings, claiming “you make us sick”. The study explores how people’s experience of their world, and their health beliefs within it, is fundamentally shaped by their inherent beliefs about the nature of being and identity in relation to the wider cosmos. Cultural and health belief systems and related rituals or behaviors sustain a people’s sense of identity, wellbeing, and integrity. In the event of dislocation and persecution, these may change into devolved forms, which eventually inter-relate with ‘modern’ biomedical systems of health in as yet unidentified ways. The development of new conceptual frameworks that model this process will greatly expand our understanding of how people survive and adapt in response to cultural trauma. It will also demonstrate the continuing role, relevance, and use of traditional medicine (TM) in present-day indigenous communities. Studies will first be made of relevant pre-Colombian material culture, and then of early colonial period ethnohistorical texts which document the health beliefs and ritual practices still employed by indigenous Andean societies at the advent of the 17th century Jesuit campaigns of persecution, the ‘Extirpación de las Idolatrías’.
Core beliefs drawn from these baseline studies will then be used to construct a questionnaire about current health beliefs and practices to be taken into the study population of indigenous Quechua peoples in the northern Andean region of Ecuador. Their current systems of knowledge and medicine have evolved into new forms within complex historical contexts: conquest by invading Inca armies in the late 15th century, followed a generation later by Spain. A new model will be developed of contemporary Andean concepts of health, illness, and healing, demonstrating the way these have changed through time. With this, a ‘policy tool’ will be constructed as a bridging facility into contemporary global scenarios relevant to other Indigenous, First Nations, and migrant peoples, to provide a means through which their traditional health beliefs and current needs may be more appropriately understood and met. This paper presents findings from the first analytical phases of the work, based upon the study of the literature and the archaeological records. The study offers a novel perspective and methods in the development of policies sensitive to indigenous and minority peoples’ health needs. Keywords: Andean ethnomedicine, Andean health beliefs, health beliefs models, traditional medicine
Procedia PDF Downloads 348
89 Recovery in Serious Mental Illness: Perception of Health Care Trainees in Morocco
Authors: Sophia El Ouazzani, Amer M. Burhan, Mary Wickenden
Abstract:
Background: Despite improvements in recent years, the Moroccan mental healthcare system still faces a disparity between available resources and the current population’s needs. Societal stigma and limited economic, political, and human resources are all factors shaping the psychiatric system, exacerbating the discontinuity of services for users after discharge from the hospital. As a result, limited opportunities for social inclusion and meaningful community engagement undermine human rights and recovery potential for people with mental health problems, especially those with psychiatric disabilities from serious mental illness (SMI). Recovery-oriented practice, such as mental health rehabilitation, addresses the complex needs of patients with SMI and supports their community inclusion. The cultural acceptability of recovery-oriented practice is an important notion to consider for successful implementation. Exploring the extent to which recovery-oriented practices are used in Morocco is a necessary first step in assessing the cultural relevance of such a practice model. Aims: This study aims to explore understanding and knowledge, perception, and perspective about core concepts in mental health rehabilitation, including psychiatric disability, recovery, and engagement in meaningful occupations for people with SMI in Morocco. Methods: A pilot qualitative study was undertaken. Data was collected via semi-structured interviews and focus group discussions with healthcare professional students. Questions were organised around the following themes: 1) students’ perceptions, understanding, and expectations around concepts such as SMI, mental health disability, and recovery, and 2) changes in their views and expectations after starting their professional training. Further analysis was done of students’ perspectives on the concept of ‘meaningful occupation’ and how this is viewed within the context of the research questions.
The data was extracted using an inductive thematic analysis approach. This is a pilot stage of a doctoral project; further data will be collected and analysed until saturation is reached. Results: A total of eight students were included in this study, comprising occupational therapy and mental health nursing students receiving training in Morocco. The following themes emerged as influencing students’ perceptions and views around the main concepts: 1) Stigma and discrimination, 2) Fatalism and low expectations, 3) Gendered perceptions, 4) Religious causation, 5) Family involvement, 6) Professional background, 7) Inaccessibility of services and treatment. Discussion/Contribution: Preliminary analysis of the data suggests that students’ perceptions changed after gaining more clinical experience and being exposed to people with psychiatric disabilities. Prior to their training, stigma greatly shaped how they viewed people with SMI. The fear, misunderstanding, and shame around SMI and people's functional capacities may contribute to people with SMI being stigmatised and marginalised from their family and their community. Religious causations associated with SMI are understood as further deepening the social stigma around psychiatric disability. Perceptions are influenced by gender, with women being doubly discriminated against in relation to recovery opportunities. Therapeutic pessimism seems to persist amongst students, and within the mental healthcare system in general, regarding the recovery potential and opportunities for people with SMI. The limited resources, fatalism, and stigma all contribute to the low expectations for recovery and community inclusion. Implications and future directions will be discussed. Keywords: disability, mental health rehabilitation, recovery, serious mental illness, transcultural psychiatry
Procedia PDF Downloads 145
88 Challenges to Developing a Trans-European Programme for Health Professionals to Recognize and Respond to Survivors of Domestic Violence and Abuse
Authors: June Keeling, Christina Athanasiades, Vaiva Hendrixson, Delyth Wyndham
Abstract:
Recognition and education in violence, abuse, and neglect for medical and healthcare practitioners (REVAMP) is a trans-European project aiming to introduce a training programme that has been specifically developed by partners across seven European countries to meet the needs of medical and healthcare practitioners. Amalgamating the knowledge and experience of clinicians, researchers, and educators from interdisciplinary and multi-professional backgrounds, REVAMP has tackled the under-resourced and underdeveloped area of domestic violence and abuse. The team designed an online training programme to support medical and healthcare practitioners to recognise and respond appropriately to survivors of domestic violence and abuse at their point of contact with a health provider. The REVAMP partner countries span Europe: France, Lithuania, Germany, Greece, Iceland, Norway, and the UK. The training is delivered through a series of interactive online modules, adapting evidence-based pedagogical approaches to learning. Capturing and addressing the complexities of the project impacted the methodological decisions and approaches to evaluation. The challenge was to find an evaluation methodology that captured valid data across all partner languages to demonstrate the extent of the change in knowledge and understanding. Co-development by all team members was a lengthy iterative process, challenged by a lack of consistency in terminology. A mixed-methods approach enabled both qualitative and quantitative data to be collected, at the start, during, and at the conclusion of the training for the purposes of evaluation. The module content and evaluation instrument were accessible in each partner country's language. Collecting both types of data provided a high-level snapshot of attainment via the quantitative dataset and an in-depth understanding of the impact of the training from the qualitative dataset. The analysis was mixed methods, with integration at multiple interfaces.
The primary focus of the analysis was to support the overall project evaluation for the funding agency. A key project outcome was identifying that the trans-European approach posed several challenges. Firstly, the project partners did not share a first language or a legal or professional approach to domestic abuse and neglect. This was negotiated through complex, systematic, and iterative interaction between team members so that consensus could be achieved. Secondly, the context of the data collection in several different cultural, educational, and healthcare systems across Europe challenged the development of a robust evaluation. The participants in the pilot evaluation shared that the training was contemporary, well-designed, and of great relevance to inform practice. Initial results from the evaluation indicated that the participants were drawn from more than eight partner countries due to the online nature of the training. The primary results indicated a high level of engagement with the content and achievement through the online assessment. The main finding was that the participants perceived the impact of domestic abuse and neglect in very different ways in their individual professional contexts. Most significantly, the participants recognised the need for the training and the gap that existed previously. It is notable that a mixed-methods evaluation of a trans-European project is unusual at this scale. Keywords: domestic violence, e-learning, health professionals, trans-European
Procedia PDF Downloads 85
87 Modeling and Performance Evaluation of an Urban Corridor under Mixed Traffic Flow Condition
Authors: Kavitha Madhu, Karthik K. Srinivasan, R. Sivanandan
Abstract:
Indian traffic can be considered mixed and heterogeneous due to the presence of various types of vehicles that operate with weak lane discipline. Consequently, vehicles can position themselves anywhere in the traffic stream depending on the availability of gaps. The choice of lateral positioning is an important component in representing and characterizing mixed traffic. The field data provide evidence that the trajectories of vehicles on Indian urban roads have significantly varying longitudinal and lateral components. Further, the notion of headway, which is widely used for homogeneous traffic simulation, is not well defined in conditions lacking lane discipline. From field data it is clear that following is not as strict as in homogeneous, lane-disciplined conditions, and neighbouring vehicles ahead of a given vehicle, as well as those adjacent to it, could also influence the subject vehicle's choice of position, speed, and acceleration. Given these empirical features, the suitability of using headway distributions to characterize mixed traffic in Indian cities is questionable and needs to be modified appropriately. To address these issues, this paper attempts to analyze the time gap distribution between consecutive vehicles (in a time sense) crossing a section of roadway. More specifically, to characterize the complex interactions noted above, the influence of composition, manoeuvre types, and lateral placement characteristics on time gap distribution is quantified in this paper. The developed model is used for evaluating various performance measures such as link speed, midblock delay, and intersection delay, which further helps to characterise vehicular fuel consumption and emissions on the urban roads of India. Identifying and analyzing the exact interactions between various classes of vehicles in the traffic stream is essential for increasing the accuracy and realism of microscopic traffic flow modelling.
In this regard, this study aims to develop and analyze time gap distribution models, quantified by lead-lag pair, manoeuvre type, and lateral position characteristics in heterogeneous, non-lane-based traffic. Once the modelling scheme is developed, it can be used for estimating the vehicle kilometres travelled for the entire traffic system, which helps to determine vehicular fuel consumption and emissions. The approach to this objective involves: data collection, statistical modelling and parameter estimation, simulation using the calibrated time-gap distribution and its validation, empirical analysis of simulation results and associated traffic flow parameters, and application to analyze illustrative traffic policies. In particular, videographic methods are used for data extraction from urban mid-block sections in Chennai, where the data comprise vehicle type, vehicle position (both longitudinal and lateral), speed, and time gap. Statistical tests are carried out to compare the simulated data with the actual data, and the model performance is evaluated. The effect of integrating the above-mentioned factors in vehicle generation is studied by comparing performance measures such as density, speed, flow, capacity, area occupancy, etc., under various traffic conditions and policies. The implications of the quantified distributions and simulation model for estimating the PCU (Passenger Car Units), capacity, and level of service of the system are also discussed. Keywords: lateral movement, mixed traffic condition, simulation modeling, vehicle following models
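The calibrate-then-simulate loop outlined above can be sketched in a few lines: fit a distribution to observed time gaps, then draw simulated gaps for vehicle generation. This is an illustrative sketch, not the authors' calibrated model; the lognormal form and the observed gap values are assumptions for demonstration only.

```python
# Hedged sketch of the calibration-then-simulation step described above:
# fit a lognormal distribution to observed time gaps (hypothetical values)
# and draw simulated gaps for vehicle generation.
import math
import random
import statistics

observed_gaps = [0.8, 1.2, 1.5, 2.1, 0.9, 3.4, 1.1, 2.8, 1.7, 0.6]  # seconds

# Calibrate lognormal parameters from the log-transformed gaps
logs = [math.log(g) for g in observed_gaps]
mu, sigma = statistics.mean(logs), statistics.stdev(logs)

def simulate_gaps(n, seed=42):
    """Generate n simulated time gaps (s) from the calibrated lognormal."""
    rng = random.Random(seed)
    return [rng.lognormvariate(mu, sigma) for _ in range(n)]

sim = simulate_gaps(1000)
```

In the study itself, the distribution would be fitted separately per lead-lag vehicle pair, manoeuvre type, and lateral position class, and the simulated gaps validated against field data with statistical tests.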
Procedia PDF Downloads 342
86 How the Writer Tells the Story Should Be the Primary Concern rather than Who Can Write about Whom: The Limits of Cultural Appropriation Vis-à-Vis The Ethics of Narrative Empathy
Authors: Alexandra Cheira
Abstract:
Cultural appropriation has been theorised as a form of colonialism in which members of a dominant culture reduce cultural elements that are deeply meaningful to a minority culture to the category of the “exotic other”, since they do not experience the oppression and discrimination faced by members of the minority culture. Yet, in the particular case of literature, writers such as Lionel Shriver and Bernardine Evaristo have argued that authors from a cultural majority have a right to write in the voice of someone from a cultural minority, hence attacking the idea that this is a form of cultural appropriation. By definition, Shriver and Evaristo claim, writers are supposed to write beyond their own culture, gender, class, and/or race. In this light, this paper discusses the limits of cultural appropriation vis-à-vis the ethics of narrative empathy by addressing the mixed critical reception of Kathryn Stockett’s The Help (2009) and Jeanine Cummins’s American Dirt (2020). In fact, both novels were acclaimed as global eye-openers regarding the struggles of, respectively, Mexican migrants and African American maids. At the same time, both novelists have been accused of cultural appropriation by telling a story that is not theirs to tell, given the fact that they are white women telling these stories in what critics have argued is really an American voice telling a story to American readers. These claims will be investigated within the framework of Edward Said’s foundational examination of Orientalism in the field of postcolonial studies as a Western style for authoritatively restructuring the Orient. This means that Orientalist stereotypes regarding Eastern cultures have implicitly validated colonial and imperial pursuits, in the specific context of literary representations of African American and Mexican cultures by white writers.
At the same time, the conflicted reception of American Dirt and The Help will be examined within the critical framework of narrative empathy as theorised by Suzanne Keen. Hence, there will be a particular focus on the way a reader’s heated perception that the author’s perspective is purely dishonest can result from a friction between an author’s intention and a reader’s experience of narrative empathy, while a shared sense of empathy between authors and readers can be a rousing momentum to move beyond literary response to social action. Finally, in order to assess that “the key question should not be who can write about whom, but how the writer tells the story”, the recent controversy surrounding Dutch author Marieke Lucas Rijneveld’s decision to resign from the translation of American poet Amanda Gorman’s work into Dutch will be duly investigated. In fact, Rijneveld stepped down after journalist and activist Janice Deul criticised Dutch publisher Meulenhoff for choosing a translator who was not also Black, despite the fact that 22-year-old Gorman had selected the 29-year-old Rijneveld herself, as a fellow young writer who had likewise come to fame early on in life. In this light, the critical argument that the controversial reception of The Help reveals as much about US race relations in the early twenty-first century as about the complex literary transactions between individual readers and the novel itself will also be discussed in the extended context of American Dirt and white author Marieke Rijneveld’s withdrawal from the projected translation of Black poet Amanda Gorman’s work. Keywords: cultural appropriation, cultural stereotypes, narrative empathy, race relations
Procedia PDF Downloads 70
85 Prevalence and Diagnostic Evaluation of Schistosomiasis in School-Going Children in Nelson Mandela Bay Municipality: Insights from Urinalysis and Point-of-Care Testing
Authors: Maryline Vere, Wilma ten Ham-Baloyi, Lucy Ochola, Opeoluwa Oyedele, Lindsey Beyleveld, Siphokazi Tili, Takafira Mduluza, Paula E. Melariri
Abstract:
Schistosomiasis, caused by Schistosoma (S.) haematobium and Schistosoma (S.) mansoni parasites, poses a significant public health challenge in low-income regions. Diagnosis typically relies on identifying specific urine biomarkers such as haematuria, protein, and leukocytes for S. haematobium, while the Point-of-Care Circulating Cathodic Antigen (POC-CCA) assay is employed for detecting S. mansoni. Urinalysis and the POC-CCA assay are favoured for their rapid, non-invasive nature and cost-effectiveness. However, traditional diagnostic methods such as Kato-Katz and urine filtration lack sensitivity in low-transmission areas, which can lead to underreporting of cases and hinder effective disease control efforts. Therefore, in this study, urinalysis and the POC-CCA assay were utilised to diagnose schistosomiasis effectively among school-going children in Nelson Mandela Bay Municipality. This was a cross-sectional study with a total of 759 children, aged 5 to 14 years, who provided urine samples. Urinalysis was performed using urinary dipstick tests, which measure multiple parameters, including haematuria, protein, leukocytes, bilirubin, urobilinogen, ketones, pH, specific gravity, and other biomarkers. Urinalysis was performed by dipping the strip into the urine sample and observing colour changes on specific reagent pads. The POC-CCA test was conducted by applying a drop of urine onto a cassette containing CCA-specific antibodies, and the presence of a visible test line indicated a positive result for S. mansoni infection. Descriptive statistics were used to summarize urine parameters, and Pearson correlation coefficients (r) were calculated to analyze associations among urine parameters using R software (version 4.3.1). Among the 759 children, the prevalence of S. haematobium using haematuria as a diagnostic marker was 33.6%. Additionally, leukocytes were detected in 21.3% of the samples, and protein was present in 15%.
The prevalence of positive POC-CCA test results for S. mansoni was 3.7%. Urine parameters exhibited low to moderate associations, suggesting complex interrelationships. For instance, specific gravity and pH showed a negative correlation (r = -0.37), indicating that higher specific gravity was associated with lower pH. Weak correlations were observed between haematuria and pH (r = -0.10), bilirubin and ketones (r = 0.14), protein and bilirubin (r = 0.13), and urobilinogen and pH (r = 0.12). A mild positive correlation was found between leukocytes and blood (r = 0.23), reflecting some association between these inflammation markers. In conclusion, the study identified a significant prevalence of schistosomiasis among school-going children in Nelson Mandela Bay Municipality, with S. haematobium detected through haematuria and S. mansoni identified using the POC-CCA assay. The detection of leukocytes and protein in urine samples provides critical biomarkers for schistosomiasis infection, reinforcing the presence of schistosomiasis in the study area when considered alongside haematuria. These urine parameters are indicative of inflammatory responses associated with schistosomiasis, underscoring the necessity for effective diagnostic methodologies. Such findings highlight the importance of comprehensive diagnostic assessments to accurately identify and monitor schistosomiasis prevalence and its associated health impacts. The significant burden of schistosomiasis in this population highlights the urgent need to develop targeted control interventions to effectively reduce its prevalence in the study area. Keywords: schistosomiasis, urinalysis, haematuria, POC-CCA
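The abstract reports Pearson correlation coefficients computed in R; as an illustration of the statistic itself, the same quantity can be computed directly. The paired specific-gravity and pH readings below are hypothetical and merely mirror the direction of the reported r = -0.37.

```python
# Illustrative Pearson correlation, the statistic the study computes in R.
# The paired dipstick readings below are hypothetical example values.
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired readings: urine specific gravity and pH
specific_gravity = [1.010, 1.015, 1.020, 1.025, 1.030]
ph = [7.0, 6.5, 6.4, 6.0, 5.8]
r = pearson_r(specific_gravity, ph)  # negative, matching the reported trend
```

In R this would be `cor(specific_gravity, ph)`; the value quantifies only the linear component of the association, which is why the study describes the weaker coefficients as "complex interrelationships" rather than strict dependencies.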
Procedia PDF Downloads 238
84 Examining Three Psychosocial Factors of Tax Compliance in Self-Employed Individuals using the Mindspace Framework - Evidence from Australia and Pakistan
Authors: Amna Tariq Shah
Abstract:
Amid the pandemic, the contemporary landscape has experienced accelerated growth in small business activities and an expanding digital marketplace, further exacerbating the issue of non-compliance among self-employed individuals through aggressive tax planning and evasion. This research seeks to address these challenges by developing strategic tax policies that promote voluntary compliance and improve taxpayer facilitation. The study employs the innovative MINDSPACE framework to examine three psychosocial factors—tax communication, tax literacy, and shaming—to optimize policy responses, address administrative shortcomings, and ensure adequate revenue collection for public goods and services. Preliminary findings suggest that incomprehensible communication from tax authorities drives individuals to seek alternative, potentially biased sources of tax information, thereby exacerbating non-compliance. Furthermore, the study reveals low tax literacy among Australian and Pakistani respondents, with many struggling to navigate complex tax processes and comprehend tax laws. Consequently, policy recommendations include simplifying tax return filing and enhancing pre-populated tax returns. In terms of shaming, the research indicates that Australians, being an individualistic society, may not respond well to shaming techniques due to privacy concerns. In contrast, Pakistanis, as a collectivistic society, may be more receptive to naming and shaming approaches. The study employs a mixed-method approach, utilizing interviews and surveys to analyze the issue in both jurisdictions. The use of mixed methods allows for a more comprehensive understanding of tax compliance behavior, combining the depth of qualitative insights with the generalizability of quantitative data, ultimately leading to more robust and well-informed policy recommendations. 
Examining evidence from contrasting jurisdictions, namely a developed country (Australia) and a developing country (Pakistan), enhances the study's applicability, providing perspectives from two disparate contexts that offer insights from opposite ends of the economic, cultural, and social spectra. The non-comparative case study methodology offers valuable insights into human behavior, which can be applied to other jurisdictions as well. The application of the MINDSPACE framework in this research is particularly significant, as it introduces a novel approach to tax compliance behavior analysis. By integrating insights from behavioral economics, the framework enables a comprehensive understanding of the psychological and social factors influencing taxpayer decision-making, facilitating the development of targeted and effective policy interventions. This research carries substantial importance as it addresses critical challenges in tax compliance and administration, with far-reaching implications for revenue collection and the provision of public goods and services. By investigating the psychosocial factors that influence taxpayer behavior and utilizing the MINDSPACE framework, the study contributes invaluable insights to the field of tax policy. These insights can inform policymakers and tax administrators in developing more effective tax policies that enhance taxpayer facilitation, address administrative obstacles, promote a more equitable and efficient tax system, and foster voluntary compliance, ultimately strengthening the financial foundation of governments and communities.
Keywords: individual tax compliance behavior, psychosocial factors, tax non-compliance, tax policy
Procedia PDF Downloads 77
83 Investigation of the Controversial Immunomodulatory Potential of Trichinella spiralis Excretory-Secretory Products versus Extracellular Vesicles Derived from These Products in vitro
Authors: Natasa Ilic, Alisa Gruden-Movsesijan, Maja Kosanovic, Sofija Glamoclija, Marina Bekic, Ljiljana Sofronic-Milosavljevic, Sergej Tomic
Abstract:
As a very promising candidate for modulating the immune response by biasing inflammatory towards anti-inflammatory responses, Trichinella spiralis infection has been shown to successfully alleviate the severity of experimental autoimmune encephalomyelitis (EAE), the animal model of the human disease multiple sclerosis. This effect is achieved via its excretory-secretory muscle larvae (ES L1) products, which affect the maturation status and function of dendritic cells (DCs) by inducing a tolerogenic status of DCs, which leads to the mitigation of the Th1 type of response and the activation of a regulatory type of immune response both in vitro and in vivo. ES L1, alone or via treated DCs, successfully mitigated EAE in the same manner as the infection itself. On the other hand, it has been shown that T. spiralis infection slows tumour growth and significantly reduces tumour size in a mouse melanoma model, while ES L1 possesses a pro-apoptotic and anti-survival effect on melanoma cells in vitro. Hence, although the mechanisms still need to be revealed, T. spiralis infection and its ES L1 products have a seemingly contradictory potential to modulate both inflammatory diseases and malignancies. The recent discovery of T. spiralis extracellular vesicles (TsEVs) suggested that the induction of complex regulation of the immune response requires simultaneous delivery of different signals in nano-sized packages. This study aimed to explore whether TsEVs bear a similar potential to ES L1 to influence the status of DCs in the initiation, progression and regulation of immune response, but also to investigate the effect of both ES L1 and TsEVs on myeloid-derived suppressor cells (MDSC), which are a regular feature of the tumour tissue environment. TsEVs were enriched from the conditioned medium of T. spiralis muscle larvae by differential centrifugation and used for the treatment of human monocyte-derived DCs and MDSC. 
On DCs, TsEVs induced low expression of HLA-DR and CD40, moderate CD83 and CD86, and increased expression of ILT3 and CCR7, i.e., they induced tolerogenic DCs. Such DCs possess the capacity to polarize the T cell immune response towards a regulatory type, with an increased proportion of IL-10 and TGF-β producing cells, similarly to ES L1. These findings indicated that the ability of TsEVs to induce tolerogenic DCs favouring anti-inflammatory responses may be helpful in coping with diseases that involve Th1/Th17-, but also Th2-mediated inflammation. In the in vitro MDSC model, although both ES L1 and TsEVs had the same impact on MDSC phenotype, i.e., both acted suppressively, ES L1-treated MDSC, unlike TsEVs-treated ones, induced a T cell response characterized by increased RORγt and IFN-γ, while the proportion of regulatory cells was decreased, accompanied by a decrease in the proportion of IL-10- and TGF-β-positive cells within this population. These findings indicate the interesting ability of ES L1 to modulate the T cell response via MDSC towards a pro-inflammatory type, suggesting that, unlike TsEVs, which consistently demonstrate a suppressive effect on the inflammatory response, it could also be used for the development of new approaches aimed at the treatment of malignant diseases. Acknowledgment: This work was funded by the Promis project – Nano-MDCS-Thera, Science Fund, Republic of Serbia.
Keywords: dendritic cells, myeloid derived suppressor cells, immunomodulation, Trichinella spiralis
Procedia PDF Downloads 204
82 Impact of Increased Radiology Staffing on After-Hours Radiology Reporting Efficiency and Quality
Authors: Peregrine James Dalziel, Philip Vu Tran
Abstract:
Objective / Introduction: Demand for radiology services from Emergency Departments (ED) continues to increase, placing greater demands on radiology staff providing reports for the management of complex cases. Queuing theory indicates that wide variability of process time, combined with the random nature of request arrival, increases the probability of significant queues. This can lead to delays in the time-to-availability of radiology reports (TTA-RR) and potentially impaired ED patient flow. In addition, the greater cognitive workload associated with higher volumes may lead to reduced productivity and increased errors. We sought to quantify the potential ED flow improvements obtainable from increased radiology providers serving 3 public hospitals in Melbourne, Australia, and to assess the potential productivity gains, quality improvement and cost-effectiveness of increased labor inputs. Methods & Materials: The Western Health Medical Imaging Department moved from single resident coverage on weekend days 8:30 am-10:30 pm to a limited period of 2-resident coverage 1 pm-6 pm on both weekend days. The TTA-RR for weekend CT scans was calculated from the PACS database for the 8-month period symmetrically around the date of the staffing change. A multivariate linear regression model was developed to isolate the improvement in TTA-RR between the two 4-month periods. Daily and hourly scan volume at the time of each CT scan was calculated to assess the impact of varying department workload. To assess any improvement in report quality/errors, a random sample of 200 studies was assessed to compare the average number of clinically significant over-read addendums to reports between the 2 periods. Cost-effectiveness was assessed by comparing the marginal cost of additional staffing against a conservative estimate of the economic benefit of improved ED patient throughput, using the Australian national insurance rebate for private ED attendance as a revenue proxy. 
Results: The primary resident on call and the type of scan accounted for most of the explained variability in time to report availability (R²=0.29). Increasing daily and hourly volume was associated with increased TTA-RR (1.5 min (p<0.01) and 4.8 min (p<0.01), respectively, per additional scan ordered within each time frame). Reports were available 25.9 minutes sooner on average in the 4 months post-implementation of double coverage (p<0.01), with an additional 23.6-minute improvement when 2 residents were on-site concomitantly (p<0.01). The aggregate average improvement in TTA-RR was 24.8 hours per weekend day. This represents the increased decision-making time available to ED physicians and a potential improvement in ED bed utilisation. 5% of reports from the intervention period contained clinically significant addendums vs 7% in the single resident period, but this was not statistically significant (p=0.7). The marginal cost was less than the anticipated economic benefit, assuming a 50% capture of the improved TTA-RR in patient disposition and using the lowest available national insurance rebate as a proxy for economic benefit. Conclusion: TTA-RR improved significantly during the period of increased staff availability, both during the specific period of increased staffing and throughout the day. Increased labor utilisation is cost-effective compared with the potential improved productivity for ED cases requiring CT imaging.
Keywords: workflow, quality, administration, CT, staffing
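The regression step described in the methods can be sketched in miniature: fit turnaround time on a staffing-period indicator while controlling for scan volume, and read the period coefficient as the isolated improvement. The numbers below are synthetic and chosen for illustration only (they encode an exact 25-minute effect); a real analysis would use the PACS-derived data and report standard errors.

```python
# Toy ordinary-least-squares fit via the normal equations (X'X)b = X'y,
# solved with Gaussian elimination. Pure Python; no libraries assumed.

def ols(X, y):
    """Return OLS coefficients for design-matrix rows X and response y."""
    n, k = len(X), len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)]
         for i in range(k)]                       # X'X
    b = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]  # X'y
    for i in range(k):                            # forward elimination, pivoting
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * c for a, c in zip(A[r], A[i])]
            b[r] -= f * b[i]
    beta = [0.0] * k
    for i in reversed(range(k)):                  # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return beta

# Rows: [intercept, double_coverage (0/1), scans_per_hour]; y: TTA-RR minutes.
X = [[1, 0, 2], [1, 0, 4], [1, 0, 6], [1, 0, 8],
     [1, 1, 2], [1, 1, 4], [1, 1, 6], [1, 1, 8]]
y = [40.0, 50.0, 60.0, 70.0,   # single-resident period
     15.0, 25.0, 35.0, 45.0]   # double-coverage period
intercept, period_effect, volume_effect = ols(X, y)
print(period_effect)  # negative: reports available sooner under double coverage
```

Holding volume in the model is what lets the period coefficient be read as the staffing effect rather than a workload artefact.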
Procedia PDF Downloads 113
81 Geovisualization of Human Mobility Patterns in Los Angeles Using Twitter Data
Authors: Linna Li
Abstract:
The ability to move between places is undoubtedly important for individuals to maintain good health and social functions. People’s activities in space and time have long been a research topic in behavioral and socio-economic studies, particularly focusing on the highly dynamic urban environment. By analyzing groups of people who share similar activity patterns, many socio-economic and socio-demographic problems and their relationships with individual behavior preferences can be revealed. Los Angeles, known for its large population, ethnic diversity, cultural mixing, and entertainment industry, faces great transportation challenges such as traffic congestion, parking difficulties, and long commutes. Understanding people’s travel behavior and movement patterns in this metropolis sheds light on potential solutions to complex problems regarding urban mobility. This project visualizes people’s trajectories in the Greater Los Angeles (L.A.) Area over a period of two months using Twitter data. A Python script was used to collect georeferenced tweets within the Greater L.A. Area, including Ventura, San Bernardino, Riverside, Los Angeles, and Orange counties. Information associated with tweets includes text, time, location, and user ID. Information associated with users includes name, the number of followers, etc. Both aggregated and individual activity patterns are demonstrated using various geovisualization techniques. Locations of individual Twitter users were aggregated to create a surface of activity hot spots at different time instants using kernel density estimation, which shows the dynamic flow of people’s movement throughout the metropolis in a twenty-four-hour cycle. In the 3D geovisualization interface, the z-axis indicates time that covers 24 hours, and the x-y plane shows the geographic space of the city. Any two points on the z-axis can be selected to display the activity density surface within a particular time period. 
In addition, daily trajectories of Twitter users were created using space-time paths that show the continuous movement of individuals throughout the day. When a personal trajectory is overlaid on top of ancillary layers including land use and road networks in 3D visualization, the vivid representation of a realistic view of the urban environment boosts the situational awareness of the map reader. A comparison of the same individual’s paths on different days shows some regular patterns on weekdays for some Twitter users, but other users’ daily trajectories are more irregular and sporadic. This research makes contributions in two major areas: geovisualization of spatial footprints to understand travel behavior using the big data approach, and dynamic representation of activity space in the Greater Los Angeles Area. Unlike traditional travel surveys, social media (e.g., Twitter) provides an inexpensive way of collecting data on spatio-temporal footprints. The visualization techniques used in this project are also valuable for analyzing other spatio-temporal data in the exploratory stage, thus leading to informed decisions about generating and testing hypotheses for further investigation. The next step of this research is to separate users into different groups based on gender/ethnic origin and compare their daily trajectory patterns.
Keywords: geovisualization, human mobility pattern, Los Angeles, social media
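The hot-spot surface rests on kernel density estimation; a minimal sketch of the idea, with invented coordinates rather than the project's tweet data, and a pure-Python Gaussian kernel standing in for a GIS package:

```python
from math import exp, pi

def kde(points, x, y, bandwidth=1.0):
    """Gaussian kernel density estimate at (x, y) from 2D points (in km)."""
    h2 = bandwidth ** 2
    total = sum(exp(-((x - px) ** 2 + (y - py) ** 2) / (2 * h2))
                for px, py in points)
    return total / (len(points) * 2 * pi * h2)  # normalised density

# Invented tweet locations: a tight cluster plus one distant outlier.
tweets = [(0.0, 0.0), (0.2, -0.1), (-0.1, 0.3), (5.0, 5.0)]

print(kde(tweets, 0.0, 0.0))    # high density inside the cluster
print(kde(tweets, 10.0, 10.0))  # near-zero density far from any tweet
```

Evaluating this estimate over a regular grid, once per time slice, produces exactly the kind of stacked density surfaces the 3D interface displays along its z-axis.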
Procedia PDF Downloads 121
80 La0.80Ag0.15MnO3 Magnetic Nanoparticles for Self-Controlled Magnetic Fluid Hyperthermia
Authors: Marian Mihalik, Kornel Csach, Martin Kovalik, Matúš Mihalik, Martina Kubovčíková, Maria Zentková, Martin Vavra, Vladimír Girman, Jaroslav Briančin, Marija Perovic, Marija Boškovic, Magdalena Fitta, Robert Pelka
Abstract:
Current nanomaterials for biomedical use are based mainly on iron oxides and on present knowledge of magnetic nanostructures. Manganites represent an alternative candidate material. Manganites and their unique electronic properties have been extensively studied in recent decades, not only out of fundamental interest but also for possible applications of colossal magnetoresistance, the magnetocaloric effect, and ferroelectric properties. It was found that the oxygen-reduction reaction on perovskite oxides is intimately connected with the metal-ion eg orbital occupation. The effect of oxygen deviation from the stoichiometric composition on crystal structure was studied very carefully by many authors on LaMnO₃. Depending on oxygen content, the crystal structure changes from orthorhombic to rhombohedral at an oxygen content of 3.1. In the case of hole-doped manganites, the change from the orthorhombic crystal structure, which is typical for La₁₋ₓCaₓMnO₃-based manganites, to the rhombohedral crystal structure (La₁₋ₓMₓMnO₃-based materials, where M = K, Ag, or Sr) results in an enormous increase of the Curie temperature. In our paper, we study the effect of oxygen content on the crystal structure, thermal, and magnetic properties (including the magnetocaloric effect) of the La₁₋ₓAgₓMnO₃ nanoparticle system. The oxygen content of the samples was tuned by heat treatment in different thermal regimes and in various environments (air, oxygen, argon). Aqueous nanosuspensions based on La₀.₈₀Ag₀.₁₅MnO₃ magnetic particles with a Curie temperature of about 43 °C were prepared by two different approaches. First, a laboratory circulation mill was used for milling the powder in the presence of sodium dodecyl sulphate (SDS), followed by centrifugation. The second nanosuspension was prepared using an agate bowl, etching in citric acid and HNO₃, an ultrasound homogeniser, centrifugation, and dextran (40 kDa or 15 kDa) as surfactant. 
Electrostatic stabilisation obtained by the first approach did not offer long-term kinetic and aggregation colloidal stability and was unable to compensate for attractive forces between particles under a magnetic field. By the second approach, we prepared a suspension oversaturated with dextran 40 kDa for steric stabilisation, with evidence of superparamagnetic behaviour. The disadvantages of this approach were the low concentration of nanoparticles and the imperfect coverage of nanoparticles, which affected the stability of the ferrofluids. Strong steric stabilisation was observable under alkaline conditions at pH ≈ 10. Application of dextran 15 kDa led to a relatively stable ferrofluid with pH around physiological conditions, but disaggregation of the powder by HNO₃ was not effective enough; the average fragment size was too large, about 150 nm, and we did not see any signature of superparamagnetic behaviour. The prepared ferrofluids were characterised by scanning and transmission electron microscopy, thermogravimetry, magnetisation, and AC susceptibility measurements. Specific Absorption Rate measurements were undertaken on powder as well as on ferrofluids in order to estimate the potential application of the La₀.₈₀Ag₀.₁₅MnO₃ magnetic-particle-based ferrofluid for hyperthermia. Our study also contains an investigation of the biocompatibility and potential biohazards of this material.
Keywords: manganites, magnetic nanoparticles, oxygen content, magnetic phase transition, magnetocaloric effect, ferrofluid, hyperthermia
Procedia PDF Downloads 91
79 A Computational Investigation of Potential Drugs for Cholesterol Regulation to Treat Alzheimer’s Disease
Authors: Marina Passero, Tianhua Zhai, Zuyi (Jacky) Huang
Abstract:
Alzheimer’s disease has become a major public health issue, as indicated by the growing number of Americans living with Alzheimer’s disease. After decades of extensive research in Alzheimer’s disease, only seven drugs have been approved by the Food and Drug Administration (FDA) to treat Alzheimer’s disease. Five of these drugs were designed to treat the dementia symptoms, and only two drugs (i.e., Aducanumab and Lecanemab) target the progression of Alzheimer’s disease, especially the accumulation of amyloid-β plaques. However, the accelerated approvals of both Aducanumab and Lecanemab drew controversy, especially over the safety and side effects of these two drugs. There is still an urgent need for further drug discovery to target the biological processes involved in the progression of Alzheimer’s disease. Excessive cholesterol has been found to accumulate in the brain of those with Alzheimer’s disease. Cholesterol can be synthesized in both the blood and the brain, but the majority of biosynthesis in the adult brain takes place in astrocytes and is then transported to the neurons via ApoE. The blood-brain barrier separates cholesterol metabolism in the brain from the rest of the body. Various proteins contribute to the metabolism of cholesterol in the brain, which offer potential targets for Alzheimer’s treatment. In the astrocytes, SREBP cleavage-activating protein (SCAP) binds to Sterol Regulatory Element-binding Protein 2 (SREBP2) in order to transport the complex from the endoplasmic reticulum to the Golgi apparatus. Cholesterol is secreted out of the astrocytes by the ATP-Binding Cassette A1 (ABCA1) transporter. Lipoprotein receptors such as triggering receptor expressed on myeloid cells 2 (TREM2) internalize cholesterol into the microglia, while lipoprotein receptors such as Low-density lipoprotein receptor-related protein 1 (LRP1) internalize cholesterol into the neuron. 
Cytochrome P450 Family 46 Subfamily A Member 1 (CYP46A1) converts excess cholesterol to 24S-hydroxycholesterol (24S-OHC). Cholesterol has been shown to have a direct effect on the production of amyloid-beta and tau proteins. The addition of cholesterol to the brain promotes the activity of beta-site amyloid precursor protein cleaving enzyme 1 (BACE1), secretase, and amyloid precursor protein (APP), which all aid in amyloid-beta production. The reduction of cholesterol esters in the brain has been found to reduce phosphorylated tau levels in mice. In this work, a computational pipeline was developed to identify the protein targets involved in cholesterol regulation in the brain and further to identify chemical compounds as inhibitors of a selected protein target. Since extensive evidence shows the strong correlation between brain cholesterol regulation and Alzheimer’s disease, a detailed literature review on genes or pathways related to brain cholesterol synthesis and regulation was first conducted in this work. An interaction network was then built for those genes so that the top gene targets were identified. The involvement of these genes in Alzheimer’s disease progression was discussed, which was followed by the investigation of existing clinical trials for those targets. A ligand-protein docking program was finally developed to screen 1.5 million chemical compounds for the selected protein target. A machine learning program was developed to evaluate and predict the binding interaction between chemical compounds and the protein target. The results from this work pave the way for further drug discovery to regulate brain cholesterol to combat Alzheimer’s disease.
Keywords: Alzheimer’s disease, drug discovery, ligand-protein docking, gene-network analysis, cholesterol regulation
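The final screening step, however it is implemented, reduces to filtering a compound library and ranking by predicted binding. A hedged sketch of that step follows; the compound IDs, molecular weights, scores, and the simple weight cut-off are illustrative placeholders, not the study's actual pipeline, which docks 1.5 million compounds and scores them with a trained machine-learning model.

```python
# Hypothetical library entries: each compound has a predicted docking
# score (more negative = stronger predicted binding, a common convention).
library = [
    {"id": "CMP-001", "mol_weight": 342.0, "score": -9.2},
    {"id": "CMP-002", "mol_weight": 612.5, "score": -12.0},  # too heavy
    {"id": "CMP-003", "mol_weight": 410.3, "score": -11.4},
    {"id": "CMP-004", "mol_weight": 295.1, "score": -7.8},
]

# Lipinski-style drug-likeness cut-off, then rank by predicted affinity.
drug_like = [c for c in library if c["mol_weight"] <= 500]
top_hits = sorted(drug_like, key=lambda c: c["score"])[:2]
print([c["id"] for c in top_hits])  # ['CMP-003', 'CMP-001']
```

Note that the heaviest compound is excluded despite having the best raw score, which is why screening pipelines typically filter before ranking.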
Procedia PDF Downloads 76
78 Culture and Health Equity: Unpacking the Sociocultural Determinants of Eye Health for Indigenous Australian Diabetics
Authors: Aryati Yashadhana, Ted Fields Jnr., Wendy Fernando, Kelvin Brown, Godfrey Blitner, Francis Hayes, Ruby Stanley, Brian Donnelly, Bridgette Jerrard, Anthea Burnett, Anthony B. Zwi
Abstract:
Indigenous Australians experience some of the worst health outcomes globally, with life expectancy significantly lower than that of non-Indigenous Australians. This is largely attributed to preventable diseases such as diabetes (prevalence 39% in Indigenous Australian adults > 55 years), which is associated with a raised risk of diabetic visual impairment and cataract among Indigenous adults. Our study aims to explore the interface between structural and sociocultural determinants and human agency, in order to understand how they impact (1) accessibility of eye health and chronic disease services and (2) the potential for Indigenous patients to achieve positive clinical eye health outcomes. We used Participatory Action Research methods, and aimed to privilege the voices of Indigenous people through community collaboration. Semi-structured interviews (n=82) and patient focus groups (n=8) were conducted by Indigenous Community-Based Researchers (CBRs) with diabetic Indigenous adults (> 40 years) in four remote communities in Australia. Interviews (n=25) and focus groups (n=4) with primary health care clinicians in each community were also conducted. Data were audio recorded, transcribed verbatim, and analysed thematically using grounded theory, comparative analysis and Nvivo 10. Preliminary analysis occurred in tandem with data collection to determine theoretical saturation. The principal investigator (AY) led analysis sessions with CBRs, fostering cultural and contextual appropriateness in interpreting responses, knowledge exchange and capacity building. Identified themes were conceptualised into three spheres of influence: structural (health services, government), sociocultural (Indigenous cultural values, distrust of the health system, ongoing effects of colonialism and dispossession) and individual (health beliefs/perceptions, patient phenomenology). 
Permeating these spheres of influence were three core determinants: economic disadvantage, health literacy/education, and cultural marginalisation. These core determinants affected accessibility of services, and the potential for patients to achieve positive clinical outcomes at every level of care (primary, secondary, tertiary). Our findings highlight the clinical realities of institutionalised and structural inequities, illustrated through the lived experiences of Indigenous patients and primary care clinicians in the four sampled communities. The complex determinants surrounding inequity in health for Indigenous Australians are entrenched through a longstanding experience of cultural discrimination and ostracism. Secure and long-term funding of Aboriginal Community Controlled Health Services would be valuable but is insufficient to address issues of inequity. Rather, working collaboratively with communities to build trust and identify needs and solutions at the grassroots level, while leveraging community voices to drive change at the systemic/policy level, is recommended.
Keywords: indigenous, Australia, culture, public health, eye health, diabetes, social determinants of health, sociology, anthropology, health equity, Aboriginal and Torres Strait Islander, primary care
Procedia PDF Downloads 303
77 Integrating Evidence Into Health Policy: Navigating Cross-Sector and Interdisciplinary Collaboration
Authors: Tessa Heeren
Abstract:
The following proposal pertains to the complex process of successfully implementing health policies that are based on public health research. A systematic review was conducted by the author and faculty at the Cluj School of Public Health in Romania. The reviewed articles covered a wide range of topics, such as barriers and facilitators to multi-sector collaboration, differences in professional cultures, and systemic obstacles. The reviewed literature identified communication, collaboration, user-friendly dissemination, and documentation of processes in the execution of applied research as important themes for the promotion of evidence in the public health decision-making process. This proposal fits into the Academy Health National Health Policy conference because it identifies and examines differences between the worlds of research and politics. Implications and new insights for federal and/or state health policy: Recommendations made based on the findings of this research include using politically relevant levers to promote research (e.g. campaign donors, lobbies, established parties, etc.), modernizing dissemination practices, and reforms in which the involvement of external stakeholders is facilitated without relying on invitations from individual policy makers. Description of how evidence and/or data was or could be used: The reviewed articles illustrated shortcomings and areas for improvement in policy research processes and collaborative development. In general, the evidence base in the field of integrating research into policy lacks critical details of the actual process of developing evidence-based policy. This shortcoming in logistical details creates a barrier to potential replication of collaborative efforts described in studies. 
Potential impact of the presentation for health policy: The reviewed articles focused on identifying barriers and facilitators that arise in cross-sector collaboration, rather than the process and impact of integrating evidence into policy. In addition, the type of evidence used in policy was rarely specified, and widely varying interpretations of the definition of evidence complicated overall conclusions. Background: Using evidence to inform public health decision-making processes has been proven effective; however, it is not clear how research is applied in practice. Aims: The objective of the current study was to assess the extent to which evidence is used in the public health decision-making process. Methods: To identify eligible studies, seven bibliographic databases, specifically PubMed, Scopus, Cochrane Library, Science Direct, Web of Science, ClinicalKey, and Health and Safety Science Abstracts, were screened (search dates: 1990 – September 2015); a general internet search was also conducted. Primary research and systematic reviews about the use of evidence in public health policy in Europe were included. The studies considered for inclusion were assessed by two reviewers, along with extracted data on objective, methods, population, and results. Data were synthesized as a narrative review. Results: Of 2564 articles initially identified, 2525 titles and abstracts were screened. Ultimately, 30 articles fit the research criteria by describing how or why evidence is used/not used in public health policy. The majority of included studies involved interviews and surveys (N=17). Study participants were policy makers, health care professionals, researchers, community members, service users, and experts in public health.
Keywords: cross-sector, dissemination, health policy, policy implementation
Procedia PDF Downloads 226
76 Agenesis of the Corpus Callosum: The Role of Neuropsychological Assessment with Implications to Psychosocial Rehabilitation
Authors: Ron Dick, P. S. D. V. Prasadarao, Glenn Coltman
Abstract:
Agenesis of the corpus callosum (ACC) is a failure to develop the corpus callosum, the large bundle of fibers of the brain that connects the two cerebral hemispheres. It can occur as a partial or complete absence of the corpus callosum. In the general population, its estimated prevalence rate is 1 in 4000, and a wide range of genetic, infectious, vascular, and toxic causes have been attributed to this heterogeneous condition. The diagnosis of ACC is often achieved by neuroimaging procedures. Though persons with ACC can perform normally on intelligence tests, they generally present with a range of neuropsychological and social deficits. The deficit profile is characterized by poor coordination of motor movements, slow reaction time and processing speed, and poor memory. Socially, they present with deficits in communication, language processing, theory of mind, and interpersonal relationships. The present paper illustrates the role of neuropsychological assessment, with implications for psychosocial management, in a case of agenesis of the corpus callosum. Method: A 27-year-old left-handed Caucasian male with a history of ACC was self-referred for a neuropsychological assessment to assist him with his employment options. Parents noted significant difficulties with coordination and balance at an early age of 2-3 years, and he was diagnosed with dyspraxia at the age of 14 years. History also indicated visual impairment, hypotonia, poor muscle coordination, and delayed development of motor milestones. An MRI scan indicated agenesis of the corpus callosum with characteristic ventricular morphology: widely spaced parallel lateral ventricles and mild dilatation of the posterior horns. It also showed colpocephaly, a disproportionate enlargement of the occipital horns of the lateral ventricles, which might be affecting his motor abilities and contributing to his visual defects. The MRI scan ruled out other structural abnormalities or neonatal brain injury. 
At the time of assessment, the subject presented with such problems as poor coordination, slowed processing speed, poor organizational skills and time management, and difficulty with social cues and facial expressions. A comprehensive neuropsychological assessment was planned and conducted to assist in identifying the current neuropsychological profile to facilitate the formulation of a psychosocial and occupational rehabilitation programme. Results: General intellectual functioning was within the average range and his performance on memory-related tasks was adequate. Significant visuospatial and visuoconstructional deficits were evident across tests; constructional difficulties were seen in tasks such as copying a complex figure, building a tower and manipulating blocks. Poor visual scanning ability and visual motor speed were evident. Socially, the subject reported heightened social anxiety, difficulty in responding to cues in the social environment, and difficulty in developing intimate relationships. Conclusion: Persons with ACC are known to present with specific cognitive deficits and problems in social situations. Findings from the current neuropsychological assessment indicated significant visuospatial difficulties, poor visual scanning and problems in social interactions. His general intellectual functioning was within the average range. Based on the findings from the comprehensive neuropsychological assessment, a structured psychosocial rehabilitation programme was developed and recommended.
Keywords: agenesis, callosum, corpus, neuropsychology, psychosocial, rehabilitation
Procedia PDF Downloads 276
75 Effect of a Chatbot-Assisted Adoption of Self-Regulated Spaced Practice on Students' Vocabulary Acquisition and Cognitive Load
Authors: Ngoc-Nguyen Nguyen, Hsiu-Ling Chen, Thanh-Truc Lai Huynh
Abstract:
In foreign language learning, vocabulary acquisition has consistently posed challenges to learners, especially those at lower proficiency levels. Conventional approaches often fail either to promote vocabulary learning or to ensure engaging experiences. The emergence of mobile learning, particularly the integration of chatbot systems, offers alternative ways to facilitate this practice. Chatbots have proven effective in educational contexts by offering interactive learning experiences in a constructivist manner, and these tools have attracted attention in the field of mobile-assisted language learning (MALL) in recent years. This research is conducted in an English for Specific Purposes (ESP) course at the A2 level of the CEFR, designed for non-English majors. Participants are first-year Vietnamese university students aged 18 to 20. This quasi-experimental study follows a pretest-posttest control group design over five weeks, with two classes randomly assigned as the experimental and control groups. The experimental group engages in chatbot-assisted spaced practice with self-regulated learning (SRL) components, while the control group uses the same spaced practice without SRL. The two classes are taught by the same lecturer. Data are collected through pre- and post-tests, cognitive load surveys, and semi-structured interviews. The combination of SRL and distributed practice, grounded in the spacing effect, forms the basis of the present study. SRL elements, which concern goal setting and strategy planning, are integrated into the system. The spaced practice method, similar to those used in widely recognized learning platforms such as Duolingo and Anki flashcards, spreads learning out over multiple sessions. The study's design features quizzes that progressively increase in difficulty, targeting both the Recognition-Recall and Comprehension-Use dimensions for comprehensive vocabulary acquisition.
The mobile-based chatbot system is built using Golang, an open-source programming language developed by Google. It follows a structured flow that guides learners through a series of four quizzes in each week of teacher-led learning. The quizzes start with less cognitively demanding tasks, such as multiple-choice questions, before moving on to more complex exercises. The integration of SRL elements allows students to self-evaluate the difficulty of vocabulary items, predict their scores, and choose appropriate strategies. This research is part one of a two-part project; the initial findings will inform the development of an upgraded chatbot system in part two, where adaptive features responding to the SRL components will be introduced. The research objectives are to assess the effectiveness of the chatbot-assisted approach, based on the combination of spaced practice and SRL, in improving vocabulary acquisition and managing cognitive load, and to understand students' perceptions of this learning tool. The insights from this study will contribute to the growing body of research on mobile-assisted language learning and offer practical implications for integrating chatbot systems with spaced practice into educational settings to enhance vocabulary learning.
Keywords: mobile learning, mobile-assisted language learning, MALL, chatbots, vocabulary learning, spaced practice, spacing effect, self-regulated learning, SRL, self-regulation, EFL, cognitive load
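The weekly quiz progression and the spacing scheme described in the abstract can be sketched in Go, the language the chatbot itself is written in. This is an illustrative sketch only: of the quiz formats, only multiple-choice is named by the authors, and the doubling review intervals are a common flashcard-style assumption, not the study's actual schedule.

```go
package main

import "fmt"

// Quiz pairs a task format with the vocabulary dimension it targets.
// Only "multiple-choice" appears in the abstract; the other formats
// are illustrative placeholders.
type Quiz struct {
	Format    string
	Dimension string // "Recognition-Recall" or "Comprehension-Use"
}

// weeklyQuizPlan returns the four quizzes for one week of teacher-led
// learning, ordered from least to most cognitively demanding.
func weeklyQuizPlan() []Quiz {
	return []Quiz{
		{"multiple-choice", "Recognition-Recall"},
		{"matching", "Recognition-Recall"},
		{"gap-fill", "Comprehension-Use"},
		{"sentence-writing", "Comprehension-Use"},
	}
}

// reviewDays returns n expanding review offsets (in days) for one
// vocabulary set, a spacing scheme similar to flashcard tools; the
// gap-doubling rule is an assumption, not the study's schedule.
func reviewDays(n int) []int {
	days := make([]int, 0, n)
	day, gap := 0, 1
	for i := 0; i < n; i++ {
		day += gap
		days = append(days, day)
		gap *= 2 // widen the spacing after each review
	}
	return days
}

func main() {
	for i, q := range weeklyQuizPlan() {
		fmt.Printf("Quiz %d: %s (%s)\n", i+1, q.Format, q.Dimension)
	}
	fmt.Println("Review on days:", reviewDays(4)) // [1 3 7 15]
}
```

The point of the sketch is the ordering constraint: recognition-level formats precede use-level formats within a week, while the expanding offsets spread reviews of the same items across the five-week study period.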
Procedia PDF Downloads 22
74 Thermally Conductive Polymer Nanocomposites Based on Graphene-Related Materials
Authors: Alberto Fina, Samuele Colonna, Maria del Mar Bernal, Orietta Monticelli, Mauro Tortello, Renato Gonnelli, Julio Gomez, Chiara Novara, Guido Saracco
Abstract:
Thermally conductive polymer nanocomposites are of high interest for several applications, including low-temperature heat recovery, heat exchangers in corrosive environments, and heat management in electronics and flexible electronics. In this paper, the preparation of thermally conductive nanocomposites exploiting graphene-related materials is addressed, along with their thermal characterization. In particular, the correlations of (1) the chemical and physical features of the nanoflakes and (2) the processing conditions with the heat-conduction properties of the nanocomposites are studied. Polymers are heat insulators; therefore, the inclusion of conductive particles is the typical route to sufficient thermal conductivity. In addition to traditional microparticles such as graphite and ceramics, several nanoparticles have been proposed for use in polymer nanocomposites, including carbon nanotubes and graphene. Indeed, thermal conductivities for both carbon nanotubes and graphene have been reported in the wide range of about 1500 to 6000 W/mK, although this property may decrease dramatically as a function of size, number of layers, density of topological defects, and re-hybridization defects, as well as the presence of impurities. Different synthetic techniques have been developed, including mechanical cleavage of graphite, epitaxial growth on SiC, chemical vapor deposition, and liquid-phase exfoliation. However, the industrial scale-up of graphene, defined as an individual, single-atom-thick sheet of hexagonally arranged sp2-bonded carbons, remains very challenging. For large-scale bulk applications in polymer nanocomposites, graphene-related materials such as multilayer graphene (MLG), reduced graphene oxide (rGO), and graphite nanoplatelets (GNP) are currently the most interesting graphene-based materials.
In this paper, different types of graphene-related materials were characterized for their chemical and physical features as well as for the thermal properties of individual flakes. Two selected rGOs were annealed at 1700°C in vacuum for 1 h to reduce the defectiveness of the carbon structure. The thermal conductivity increase of individual flakes with annealing was assessed via scanning thermal microscopy. Graphene nanopapers were prepared from both conventional rGO and annealed rGO flakes. Characterization of the nanopapers evidenced a five-fold increase in the in-plane thermal diffusivity for annealed nanoflakes compared to pristine ones, demonstrating the importance of reducing structural defectiveness to maximize heat dissipation performance. Both pristine and annealed rGO were used to prepare polymer nanocomposites by reactive melt extrusion. A two- to three-fold increase in the thermal conductivity of the nanocomposite was observed for high-temperature-treated rGO compared to untreated rGO, evidencing the importance of using low-defectivity nanoflakes. Furthermore, the study of different processing parameters (time, temperature, shear rate) during the preparation of poly(butylene terephthalate) nanocomposites evidenced a clear correlation with the dispersion and fragmentation of the GNP nanoflakes, which in turn affected the thermal conductivity. A thermal conductivity of about 1.7 W/mK, i.e., one order of magnitude higher than for the pristine polymer, was obtained with 10 wt% of annealed GNPs, which is in line with state-of-the-art nanocomposites prepared by more complex and less scalable in situ polymerization processes.
Keywords: graphene, graphene-related materials, scanning thermal microscopy, thermally conductive polymer nanocomposites
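As context for the reported order-of-magnitude enhancement, a standard first-order estimate (not a model used by the authors) is the Maxwell effective-medium expression for spherical inclusions of conductivity $k_f$ at volume fraction $\phi$ in a matrix of conductivity $k_m$:

```latex
k_{\mathrm{eff}} = k_m \,\frac{k_f + 2k_m + 2\phi\,(k_f - k_m)}{k_f + 2k_m - \phi\,(k_f - k_m)}
\;\xrightarrow{\,k_f \gg k_m\,}\; k_m \,\frac{1 + 2\phi}{1 - \phi}
```

For 10 wt% of graphitic filler (roughly 5-6 vol%, assuming typical filler and polymer densities) this bound predicts only about a 1.1-1.2x gain over a matrix of roughly 0.2 W/mK. The roughly ten-fold gain reported here therefore reflects what the spherical-inclusion model cannot capture: the high aspect ratio of the flakes, their percolating networks, and the low defectiveness achieved by annealing.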
Procedia PDF Downloads 268
73 Information Pollution: Exploratory Analysis of Sub-Saharan African Media's Capabilities to Combat Misinformation and Disinformation
Authors: Muhammed Jamiu Mustapha, Jamiu Folarin, Stephen Obiri Agyei, Rasheed Ademola Adebiyi, Mutiu Iyanda Lasisi
Abstract:
The role of information in societal development and growth cannot be over-emphasized. Managing the flow of information has remained an age-old strategy for building an egalitarian society. The same flow, however, has also become a tool for throwing society into chaos and anarchy: information has been adopted as a weapon of war and a veritable instrument of psychological warfare with a variety of uses. That is why some scholars posit that information can be deployed as a weapon to wreak "mass destruction" or to promote "mass development". When used as a tool of destruction, its effect on society is like that of an atomic bomb which, when released, pollutes the air and suffocates the people. Technological advancement has further exposed the latent power of information, and many societies seem overwhelmed by its negative effects. While information remains one of the bedrocks of democracy, the information ecosystem across the world is currently facing a more difficult battle than ever before due to information pluralism and technological advancement; the more the agents involved try to combat the menace, the more difficult and complex it proves to curb. In a region like Africa, where fragile democracies are entangled with the complexities of multiple religions and cultures, inter-tribal tensions, and ongoing unresolved issues, it is important to pay critical attention to information disorder and to find appropriate ways to curb or mitigate its effects. The media, as the middleman in the distribution of information, need to build the capacity and capability to separate the chaff of misinformation and disinformation from the grains of truthful data. Quasi-statistical observation suggests that efforts aimed at fighting information pollution have not considered the resilience of media organisations against this disorder.
Apparently, the efforts, resources, and technologies deployed for the conception, production, and spread of information pollution are far more sophisticated than the approaches used to suppress or even reduce its effects on society. Thus, this study interrogates the phenomenon of information pollution and the capabilities of selected media organisations in Sub-Saharan Africa. In doing so, the following questions are probed: What actions are the media taking to curb the menace of information pollution? Which of these actions are working, and how effective are they? Which are not working, and why? Adopting quantitative and qualitative approaches and anchored in Dynamic Capability Theory, the study aims to generate insights that further the understanding of the complexities of information pollution, media capabilities, and strategic resources for managing misinformation and disinformation in the region. The quantitative approach involves surveys, using questionnaires to gather data from journalists on their understanding of misinformation and disinformation and on their gatekeeping capabilities. Case analyses of selected media organisations and content analysis of their strategic resources for managing misinformation and disinformation are also adopted, while the qualitative approach involves in-depth interviews for a more robust analysis. The study is critical to the fight against information pollution for a number of reasons. First, it is a novel attempt to document the level of media capability to fight the phenomenon of information disorder. Second, the study will give the region a clear understanding of the capabilities of existing media organisations to combat misinformation and disinformation in the countries that make it up.
Recommendations emanating from the study could be used to initiate, intensify, or review existing approaches to combat the menace of information pollution in the region.
Keywords: disinformation, information pollution, misinformation, media capabilities, sub-Saharan Africa
Procedia PDF Downloads 162