Search results for: tangible user interface
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3558

138 Rapid, Direct, Real-Time Method for Bacteria Detection on Surfaces

Authors: Evgenia Iakovleva, Juha Koivisto, Pasi Karppinen, J. Inkinen, Mikko Alava

Abstract:

Preventing the spread of infectious diseases worldwide is one of the most important tasks of modern health care. Infectious diseases not only account for one fifth of deaths in the world but also cause many pathological complications for human health. Touch surfaces are an important vector for the spread of infections by various microorganisms, including antimicrobial-resistant organisms. Antimicrobial resistance, in turn, is the response of bacteria to the widespread overuse or inappropriate use of antibiotics. The biggest challenges of existing bacterial detection methods are indirect determination, long analysis times, sample preparation, the use of chemicals and expensive equipment, and the need for qualified specialists. Therefore, a high-performance, rapid, real-time method is demanded for practical bacterial detection and for controlling epidemiological hazards. Among the known methods for determining bacteria on surfaces, hyperspectral methods can serve as direct and rapid methods for microorganism detection on different kinds of surfaces based on fluorescence, without sampling, sample preparation, or chemicals. The aim of this study was to assess the relevance of such systems to the remote sensing of surfaces for microorganism detection in order to prevent the global spread of infectious diseases. Bacillus subtilis and Escherichia coli at different concentrations (from 0 to 10⁸ cells/100 µL) were detected with a hyperspectral camera using different filters to visualize bacteria and background spots on a steel plate. A method of internal standards was applied to monitor the correctness of the analysis results. The distances from the sample to the hyperspectral camera and to the light source are 25 cm and 40 cm, respectively. Each sample is optically imaged from the surface by the hyperspectral imaging system, utilizing a JAI CM-140GE-UV camera. The light source is a BeamZ FLATPAR DMX Tri-light with 3 W tri-colour LEDs (red, blue, and green); light colours are changed through a DMX USB Pro interface. The developed system was calibrated following a standard procedure of setting the exposure and was focused for light with λ = 525 nm. The filter is a ThorLabs Kurios™ hyperspectral filter controller with wavelengths from 420 to 720 nm. All data collection, pre-processing, and multivariate analysis were performed using LabVIEW and Python software. Bacterial stains, both visible and invisible to the human eye, clustered apart from the reference steel material in clustering analyses using different light sources and filter wavelengths. The calculation of the random and systematic errors of the analysis results proved the applicability of the method in real conditions. Validation experiments were carried out with photometry and an ATP swab test. The detection limit of the developed method is several orders of magnitude lower than that of both validation methods. All parameters of the experiments were the same, except for the light. The hyperspectral imaging method separates not only bacteria from surfaces but also different types of bacteria, such as Gram-negative Escherichia coli and Gram-positive Bacillus subtilis. Unlike other microbiological methods, the developed method allows skipping sample preparation and the use of chemicals. The analysis time with the novel hyperspectral system is a few seconds, which is innovative in the field of microbiological testing.
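
As a rough sketch of the clustering step described above, the per-pixel spectra of a hyperspectral cube can be grouped with an off-the-shelf algorithm such as k-means; the cube shape, band count, and cluster count below are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_pixels(cube: np.ndarray, n_clusters: int = 3) -> np.ndarray:
    """Label every pixel of a (rows, cols, bands) cube by its spectral signature."""
    rows, cols, bands = cube.shape
    spectra = cube.reshape(-1, bands).astype(float)
    # Normalise each spectrum so clustering reflects spectral shape, not brightness.
    spectra /= spectra.sum(axis=1, keepdims=True) + 1e-9
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(spectra)
    return labels.reshape(rows, cols)

# Example: a synthetic 100x100 cube with 31 bands (420-720 nm in 10 nm steps).
cube = np.random.rand(100, 100, 31)
label_map = cluster_pixels(cube)  # one cluster is background, others candidate stains
```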

Keywords: Escherichia coli, Bacillus subtilis, hyperspectral imaging, microorganism detection

Procedia PDF Downloads 223
137 Facial Recognition and Landmark Detection in Fitness Assessment and Performance Improvement

Authors: Brittany Richardson, Ying Wang

Abstract:

For physical therapy, exercise prescription, athlete training, and regular fitness training, it is crucial to perform health or fitness assessments periodically. An accurate assessment is beneficial for tracking recovery progress, preventing potential injury, and making long-range training plans. Assessments include basic measurements (height, weight, blood pressure, heart rate, body fat, etc.) and advanced evaluations (muscle group strength, stability-mobility, movement evaluation, etc.). In current standard assessment procedures, the accuracy of assessments, especially advanced evaluations, largely depends on the experience of physicians, coaches, and personal trainers, and it is challenging to track clients' progress. Unlike traditional assessment, in this paper we present a deep-learning-based facial recognition algorithm for accurate, comprehensive, and trackable assessment. Based on the results of our assessment, physicians, coaches, and personal trainers are able to adjust the training targets and methods. The system categorizes the difficulty level of the current activity for the client or user and, furthermore, makes more comprehensive assessments by tracking muscle groups over time using a purpose-designed landmark detection method. The system also includes functions for grading and correcting clients' form during exercise. Experienced coaches and personal trainers can tell a client's limit from facial expression and muscle group movements, even during the first several sessions. Similarly, using a convolutional neural network, the system is trained on people's facial expressions to differentiate challenge levels for clients, and it uses landmark detection to capture subtle changes in muscle group movements. It measures the proximal mobility of the hips and thoracic spine, the proximal stability of the scapulothoracic region, the distal mobility of the glenohumeral joint, and the effect of distal mobility on the kinetic chain. The system integrates data from other fitness assistant devices, including but not limited to the Apple Watch and Fitbit, for improved training and testing performance. The system itself does not require historical data for an individual client, but a client's historical data can be used to create a more effective exercise plan. In order to validate the performance of the proposed work, an experimental design is presented. The results show that the proposed work contributes towards improving the quality of exercise plans, execution, progress tracking, and performance.
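
To make the expression-based difficulty classifier concrete, the sketch below shows a minimal convolutional network of the general kind described; the architecture, input size, and number of challenge levels are assumptions for illustration, not the authors' model.

```python
import torch
import torch.nn as nn

class ChallengeLevelNet(nn.Module):
    """Toy CNN mapping a 64x64 grayscale face crop to one of three challenge levels."""
    def __init__(self, n_levels: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
        )
        self.head = nn.Linear(32 * 16 * 16, n_levels)

    def forward(self, x):  # x: (batch, 1, 64, 64)
        return self.head(self.features(x).flatten(1))

model = ChallengeLevelNet()
logits = model(torch.randn(8, 1, 64, 64))  # 8 face crops -> (8, 3) level scores
```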

Keywords: exercise prescription, facial recognition, landmark detection, fitness assessments

Procedia PDF Downloads 134
136 Using Real Truck Tours Feedback for Address Geocoding Correction

Authors: Dalicia Bouallouche, Jean-Baptiste Vioix, Stéphane Millot, Eric Busvelle

Abstract:

When researchers or logistics software developers deal with vehicle routing optimization, they mainly focus on minimizing the total distance travelled or the total time spent on tours by the trucks, and on maximizing the number of visited customers. They assume that the upstream data used to optimize a transporter's tours, such as customers' real constraints, addresses, and GPS coordinates, is free from errors. However, in real transport situations, upstream data is often of poor quality because of address geocoding errors and irrelevant addresses received through EDI (Electronic Data Interchange). Geocoders are not exempt from errors and can return incorrect GPS coordinates, and even with a good geocoder, an inaccurate address can lead to bad geocoding. For instance, when a geocoder has trouble geocoding an address, it returns the coordinates of the city center. Another obvious geocoding issue is that the maps used by geocoders are not regularly updated, so new buildings may not appear on maps until the next update. Trying to optimize tours with incorrect customer GPS coordinates, which are the most important and basic input data for solving a vehicle routing problem, is not really useful and leads to bad, incoherent solution tours, because the customer locations used for the optimization are very different from their real positions. Our work is supported by the logistics software editor Tedies and the transport company Upsilon, whose truck route data we use for our experiments. These trucks are equipped with TomTom GPS units that continuously save tour data (positions, speeds, tachograph information, etc.), which we retrieve to extract the real truck routes. The aim of this work is to use the driver's experience and the feedback of real truck tours to validate the GPS coordinates of well-geocoded addresses and to correct badly geocoded ones. Thereby, when a vehicle makes its tour, it should have trouble finding a given customer's address at most once; in other words, the vehicle would be wrong at most once for each customer's address. Our method significantly improves the quality of the geocoding: we automatically correct an average of 70% of the GPS coordinates of a tour's addresses. The remaining GPS coordinates are corrected manually, with the system giving the user indications to help correct them. This study shows the importance of taking truck feedback into account to gradually correct address geocoding errors. Indeed, the accuracy of a customer's address and its GPS coordinates plays a major role in tour optimization, and address writing errors are very frequent. Such feedback is naturally and usually exploited by transporters (by asking drivers, calling customers, etc.) to learn about their tours and correct upcoming ones; we develop a method to do a large part of that automatically.
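
A minimal sketch of the correction idea, under the assumption that repeated truck stops near a customer reveal the true delivery point; the tolerance and the simple averaging scheme are illustrative, not the paper's exact algorithm.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def correct_geocode(geocoded, observed_stops, tol_m=150.0):
    """Replace a geocode with the mean of observed truck stops if they disagree."""
    mean_lat = sum(s[0] for s in observed_stops) / len(observed_stops)
    mean_lon = sum(s[1] for s in observed_stops) / len(observed_stops)
    if haversine_m(geocoded[0], geocoded[1], mean_lat, mean_lon) > tol_m:
        return (mean_lat, mean_lon), True   # corrected from tour feedback
    return geocoded, False                  # validated as-is

stops = [(47.3901, 0.6890), (47.3903, 0.6893), (47.3899, 0.6891)]  # GPS fixes
coords, changed = correct_geocode((47.3922, 0.6850), stops)
```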

Keywords: driver experience feedback, geocoding correction, real truck tours

Procedia PDF Downloads 674
135 Removal of Problematic Organic Compounds from Water and Wastewater Using the Arvia™ Process

Authors: Akmez Nabeerasool, Michaelis Massaros, Nigel Brown, David Sanderson, David Parocki, Charlotte Thompson, Mike Lodge, Mikael Khan

Abstract:

The provision of clean and safe drinking water is of paramount importance and is a basic human need. Water scarcity, coupled with the tightening of regulations and the inability of current treatment technologies to deal with emerging contaminants and pharmaceuticals and personal care products, means that viable, cost-effective alternative treatment technologies are required in order to meet demand and regulations for clean water supplies. Logistically, the application of water treatment in rural areas presents unique challenges due to the decentralisation of abstraction points arising from low population density, the resultant lack of infrastructure, and the need to treat water at the site of use. This makes it costly to centralise treatment facilities and hence provide potable water directly to the consumer. Furthermore, across the UK there are segments of the population that rely on a private water supply, which can serve from one household to hundreds; the owners or users of these supplies are responsible for their maintenance and treatment. It is therefore imperative to employ a chemical-free technological solution that can operate unattended and does not produce any waste. Arvia's patented advanced oxidation technology combines the advantages of adsorption and electrochemical regeneration within a single unit: the Organics Destruction Cell (ODC). Key to this innovative process is an alternative approach to adsorption. The conventional approach is to use high-capacity adsorbents (e.g., activated carbons with high porosities and surface areas) that are excellent adsorbents but require complex and costly regeneration. Arvia's technology uses a patent-protected adsorbent, Nyex™, a non-porous, highly conductive, graphite-based material that acts as both the adsorbent and a 3D electrode. Adsorbed organics are oxidised and the surface of the Nyex™ is regenerated in situ for further adsorption, without interruption or replacement. Treated water flows from the bottom of the cell, where it can either be re-used or safely discharged. Arvia™ Technology Ltd. has trialled the application of its tertiary water treatment technology in treating reservoir water abstracted near Glasgow, Scotland, with promising results. Several other pilot plants have also been successfully deployed at various locations in the UK, showing the suitability and effectiveness of the technology in removing recalcitrant organics (including pharmaceuticals, steroids and hormones), COD, and colour.

Keywords: Arvia™ process, adsorption, water treatment, electrochemical oxidation

Procedia PDF Downloads 263
134 Digitization and Morphometric Characterization of Botanical Collection of Indian Arid Zones as Informatics Initiatives Addressing Conservation Issues in Climate Change Scenario

Authors: Dipankar Saha, J. P. Singh, C. B. Pandey

Abstract:

The Indian Thar desert, the seventh largest in the world, is the main hot sand desert of the country; it occupies nearly 385,000 km², about 9% of the area of the country, and harbours a flora of 682 species (63 introduced) belonging to 352 genera and 87 families. The degree of endemism of plant species in the Thar desert is 6.4 percent, which is relatively higher than that of the Sahara desert and very significant for conservationists to envisage. The advent and development of computer technology for digitization and database management, coupled with the rapidly increasing importance of biodiversity conservation, resulted in the emergence of biodiversity informatics as a basic-science discipline with multiple applications. Aichi Target 19, an outcome of the Convention on Biological Diversity (CBD), specifically mandates the development of an advanced and shared biodiversity knowledge base. Information on species distributions in space is the crux of effective management of biodiversity in a rapidly changing world. The efficiency of biodiversity management is being increased rapidly by various stakeholders, such as researchers, policymakers, and funding agencies, through the knowledge and application of biodiversity informatics. Herbarium specimens are a vital repository for biodiversity conservation, especially in a climate change scenario; the digitization process usually aims to improve access and to preserve delicate specimens, and in doing so it creates large sets of images as part of the existing repository, an arid plant information facility for long-term future usage. Leaf characters are important for describing taxa and distinguishing between them, and they can be measured from herbarium specimens as well. As part of this activity, laminar characterization (leaves being the most important characters in assessing climate change impact) initially resulted in the classification of more than a thousand collections belonging to ten families: Acanthaceae, Aizoaceae, Amaranthaceae, Asclepiadaceae, Anacardiaceae, Apocynaceae, Asteraceae, Aristolochiaceae, Burseraceae, and Bignoniaceae. Taxonomic diversity indices have also been worked out, this being one of the important domains of biodiversity informatics approaches. The digitization process also encompasses workflows that incorporate automated systems, enabling digitization to be expanded and sped up. These workflows are built on a modular system with the potential to be scaled up; they are being developed with a geo-referencing tool and additional quality control elements, and they finally place specimen images and data into a fully searchable, web-accessible database. Our effort in this paper is to elucidate the role of biodiversity informatics and to present the ongoing development of a database of the institute repository's existing botanical collection. This effort is expected to form part of various global initiatives towards an effective biodiversity information facility. It will enable access to plant biodiversity data that are fit for use by scientists and decision makers working on biodiversity conservation and sustainable development in the region and in iso-climatic situations of the world.
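
As one concrete example of a taxonomic diversity index of the kind mentioned above, the sketch below computes the Shannon-Wiener index over specimen counts per family; the choice of index and the counts are illustrative assumptions, not the repository's figures.

```python
import math
from collections import Counter

def shannon_index(specimens):
    """Shannon-Wiener diversity H' over a list of family names, one per specimen."""
    counts = Counter(specimens)
    n = sum(counts.values())
    return -sum((c / n) * math.log(c / n) for c in counts.values())

# Hypothetical specimen counts for three of the digitized families.
families = ["Acanthaceae"] * 120 + ["Aizoaceae"] * 80 + ["Asteraceae"] * 200
print(f"H' = {shannon_index(families):.3f}")
```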

Keywords: biodiversity informatics, climate change, digitization, herbarium, laminar characters, web accessible interface

Procedia PDF Downloads 229
133 Developing a Framework for Designing Digital Assessments for Middle-school Aged Deaf or Hard of Hearing Students in the United States

Authors: Alexis Polanco Jr, Tsai Lu Liu

Abstract:

Research on digital assessment for deaf and hard of hearing (DHH) students is negligible. Part of this stems from DHH assessment design existing at the intersection of the emergent disciplines of usability, accessibility, and child-computer interaction (CCI). While these disciplines have some prevailing guidelines (e.g., in user experience design (UXD) there are Jakob Nielsen's 10 Usability Heuristics (Nielsen-10); for accessibility, there are the Web Content Accessibility Guidelines (WCAG) and the Principles of Universal Design (PUD)), this research was unable to uncover a unified set of guidelines. Given that digital assessments have lasting implications for the funding and shaping of U.S. school districts, it is vital that cross-disciplinary guidelines emerge. As a result, this research seeks to provide a framework by which these disciplines can share knowledge. The framework entails a process of asking subject-matter experts (SMEs) and design and development professionals to self-describe their fields of expertise, how their work might serve DHH students, and any incongruence between their ideal process and what is permissible at their workplace. This research used two rounds of mixed methods. The first round consisted of structured interviews with SMEs in usability, accessibility, CCI, and DHH education. These practitioners were not designers by trade but were revealed to use designerly work processes. In addition to being asked about their field of expertise, work process, etc., these SMEs were asked whether they believed Nielsen-10 and/or PUD were sufficient for designing products for middle-school DHH students. This first round of interviews revealed that Nielsen-10 and PUD were, at best, a starting point for creating middle-school DHH design guidelines or, at worst, insufficient. The second round of interviews followed a semi-structured interview methodology. The SMEs who were interviewed in the first round were asked open-ended follow-up questions about their semantic understanding of guidelines, going from the most general sense down to the level of design guidelines for DHH middle school students. Designers and developers who had not been interviewed previously were asked the same questions that the SMEs had been asked across both rounds of interviews. In terms of the research goals, it was confirmed that the design of digital assessments for DHH students is inherently cross-disciplinary. Unexpectedly, 1) guidelines did not emerge from the interviews conducted in this study, and 2) the principles of Nielsen-10 and PUD were deemed less relevant than expected. Given the prevalence of Nielsen-10 in UXD curricula across academia and certificate programs, this poses a risk to the efficacy of DHH assessments designed by UX designers. Furthermore, the following findings emerged: A) deep collaboration between the disciplines of usability, accessibility, and CCI is low to non-existent; B) there are no universally agreed-upon guidelines for designing digital assessments for DHH middle school students; C) these disciplines are structured academically and professionally in such a way that practitioners may not know to reach out to other disciplines. For example, accessibility teams at large organizations do not have designers and accessibility specialists on the same team.

Keywords: deaf, hard of hearing, design, guidelines, education, assessment

Procedia PDF Downloads 67
132 Shared Vision System Support for Maintenance Tasks of Wind Turbines

Authors: Buket Celik Ünal, Onur Ünal

Abstract:

Communication is the most challenging part of maintenance operations. Communication between the expert and the fieldworker is crucial for effective maintenance, and it also affects the safety of the fieldworkers. To support a machine user in a remote collaborative physical task, both a mobile and a stationary device are needed. Such a system is called a shared vision system, and it supports two people in different places working together to solve a problem. The system reduces errors and provides reliable support for both qualified and less qualified users. This research aimed to validate the effectiveness of using a shared vision system to facilitate communication between on-site workers and those issuing instructions regarding maintenance or inspection work over long distances. The system is designed around a head-worn display. As part of this study, a substitute system was implemented and used with a shared vision system for maintenance operations. The benefits of using a shared vision system are analyzed, and the results are adapted to wind turbines to improve occupational safety and health for maintenance technicians. The motivation for the research effort in this study can be summarized in the following research questions: How can an expert support a technician over long distances during a maintenance operation? What are the advantages of using a shared vision system? Experience from the experiment shows that using a shared vision system is an advantage for both electrical and mechanical system failures. The results support the use of the shared vision system for wind turbine maintenance and repair tasks, because the wind turbine generator/gearbox and the substitute system have similar failures: electrical failures, such as voltage irregularities and wiring faults, and mechanical failures, such as misalignment, vibration, and over-speed conditions, are common to both. Furthermore, the effectiveness of the shared vision system was analyzed by using smart glasses in connection with the maintenance task performed on the substitute system under four different conditions, namely using a shared vision system, audio communication, a smartphone, and working unassisted. The Chi-Square Test was used as a suitable method for determining dependencies between measured factors, the Chi-Square Test for Independence for determining a relationship between two qualitative variables, and the Mann-Whitney U Test to compare any two data sets. Based on this experiment, no relation was found between the results and gender. Participants' responses confirmed that the shared vision system is efficient and helpful for maintenance operations. The results show a statistically significant difference in the average time taken by subjects between work using a shared vision system and work under the other conditions. Additionally, this study confirmed that a shared vision system reduces the time to diagnose and resolve maintenance issues, reduces diagnosis errors, reduces travel costs for experts, and increases reliability in service.
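
The statistical tests named above can be reproduced with standard library calls; the sketch below uses SciPy with invented example data (the counts and times are placeholders, not the study's measurements).

```python
import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu

# Hypothetical contingency table: condition (rows) x task outcome (columns).
table = np.array([[18, 2],   # shared vision system: success / failure
                  [12, 8]])  # audio communication only
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}, dof={dof}")

# Hypothetical task completion times (minutes) under two conditions.
shared = [4.1, 3.8, 5.0, 4.4, 3.9]
audio = [6.2, 5.8, 7.1, 6.5, 6.9]
u, p = mannwhitneyu(shared, audio, alternative="two-sided")
print(f"U={u:.1f}, p={p:.3f}")
```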

Keywords: communication support, maintenance and inspection tasks, occupational health and safety, shared vision system

Procedia PDF Downloads 260
131 [Keynote] Implementation of Quality Control Procedures in Radiotherapy CT Simulator

Authors: B. Petrović, L. Rutonjski, M. Baucal, M. Teodorović, O. Čudić, B. Basarić

Abstract:

Purpose/Objective: Radiotherapy treatment planning requires the use of a CT simulator in order to acquire CT images. The overall performance of the CT simulator determines the quality of the radiotherapy treatment plan and, in the end, the outcome of treatment for every single patient. Therefore, international recommendations strongly advise setting up quality control procedures for every machine involved in the radiotherapy treatment planning process, including the CT scanner/simulator. The overall process requires a number of tests, which are used on a daily, weekly, monthly, or yearly basis, depending on the feature tested. Materials/Methods: Two phantoms were used: a dedicated CIRS 062QA phantom and the QA phantom supplied with the CT simulator. The examined CT simulator was a Siemens Somatom Definition AS Open, dedicated to radiation therapy treatment planning. The CT simulator has built-in software that enables fast and simple evaluation of CT QA parameters using the phantom provided with the simulator. In addition, the recommendations contain further tests, which were done with the CIRS phantom. Legislation on ionizing radiation protection also requires CT testing at defined intervals. Taking into account the requirements of the law, the built-in tests of the CT simulator, and international recommendations, the institutional QC programme for the CT simulator was defined and implemented. Results: The CT simulator parameters evaluated through the study were the following: CT number accuracy, field uniformity, the complete CT-to-ED conversion curve, spatial and contrast resolution, image noise, slice thickness, and patient table stability. The following limits were established and implemented: CT number accuracy within +/- 5 HU of the value at commissioning; field uniformity within +/- 10 HU in selected ROIs; the complete CT-to-ED curve for each tube voltage must comply with the curve obtained at commissioning, with deviations of not more than 5%; spatial and contrast resolution tests must comply with the results obtained at commissioning, otherwise the machine requires service; the image noise test result must fall within 20% of the baseline value; slice thickness must meet the manufacturer's specifications; and patient table stability under longitudinal transfer of the loaded table must not show a vertical deviation of more than 2 mm. Conclusion: The implemented QA tests gave an overall basic understanding of the CT simulator's functionality and its clinical effectiveness in radiation treatment planning. The legal requirement on the clinic is to set up its own QA programme with minimum testing, but it remains the user's decision whether to implement additional testing as recommended by international organizations, so as to improve the overall quality of the radiation treatment planning procedure: the quality of the CT images used for planning influences the delineation of the tumor, the calculation accuracy of the treatment planning system, and finally the delivery of radiation treatment to the patient.
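
The published tolerances translate directly into simple pass/fail checks; the sketch below encodes the limits quoted above, while the function names and example readings are illustrative, not part of the institutional programme.

```python
def check_ct_number(measured_hu: float, commissioning_hu: float) -> bool:
    return abs(measured_hu - commissioning_hu) <= 5.0              # +/- 5 HU

def check_uniformity(roi_values_hu: list[float], centre_hu: float) -> bool:
    return all(abs(v - centre_hu) <= 10.0 for v in roi_values_hu)  # +/- 10 HU

def check_noise(measured_sd: float, baseline_sd: float) -> bool:
    return abs(measured_sd - baseline_sd) / baseline_sd <= 0.20    # within 20 %

def check_table_sag(vertical_deviation_mm: float) -> bool:
    return vertical_deviation_mm <= 2.0                            # max 2 mm

daily_ok = all([check_ct_number(1.8, 0.0),
                check_uniformity([3.0, -4.0, 6.0, 2.0], 0.0),
                check_noise(5.4, 5.0),
                check_table_sag(1.2)])
print("Daily QA:", "PASS" if daily_ok else "FAIL")
```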

Keywords: CT simulator, radiotherapy, quality control, QA programme

Procedia PDF Downloads 532
130 Partisan Agenda Setting in Digital Media World

Authors: Hai L. Tran

Abstract:

Previous research on agenda-setting effects has often focused on the top-down influence of the media at the aggregate level, while overlooking the capacity of audience members to select media and content to fit their individual dispositions. The decentralized characteristics of online communication and digital news create more choices and greater user control, thereby enabling each audience member to seek out a unique blend of media sources, issues, and elements of messages and to mix them into a coherent individual picture of the world. This study examines how audiences use media differently depending on their prior dispositions, thereby making sense of the world in ways that are congruent with their preferences and cognitions. The current undertaking is informed by theoretical frameworks from two distinct lines of scholarship. According to the ideological migration hypothesis, individuals choose to live in communities with ideologies like their own to satisfy their need to belong. One tends to move away from ZIP codes that are incongruent and toward those that are more aligned with one's ideological orientation. This geographical division along ideological lines has been documented in social psychology research. As an extension of agenda setting, the agendamelding hypothesis argues that audiences seek out information in attractive media and blend it into a coherent narrative that fits a common agenda shared by others who think as they do and communicate with them about issues of public concern. In other words, individuals, through their media use, identify themselves with a group or community that they want to join. Accordingly, the present study hypothesizes that because ideology plays a role in pushing people toward a physical community that fits their need to belong, it also leads individuals to receive an idiosyncratic blend of media and to be influenced by such selective exposure in deciding which issues are more relevant. Consequently, the individualized focus of media choices impacts how audiences perceive political news coverage and what they know about political issues. The research project utilizes recent data from the American Trends Panel survey conducted by the Pew Research Center to explore the nuanced nature of agenda setting at the individual level and amid heightened polarization. Hypothesis testing is performed with both nonparametric and parametric procedures, including regression and path analysis. This research attempts to explore the media-public relationship from a bottom-up approach, considering the ability of active audience members to select among media in a larger process that entails agenda setting. It helps encourage agenda-setting scholars to further examine effects at the individual, rather than aggregate, level. In addition to its theoretical contributions, the study's findings are useful for media professionals in building and maintaining relationships with the audience amid changes in market share due to the spread of digital and social media.

Keywords: agenda setting, agendamelding, audience fragmentation, ideological migration, partisanship, polarization

Procedia PDF Downloads 59
129 Monitoring of Indoor Air Quality in Museums

Authors: Olympia Nisiforou

Abstract:

The cultural heritage of each country represents a unique and irreplaceable witness of the past. Nevertheless, on many occasions such heritage is extremely vulnerable to natural disasters and reckless behavior. Even when such exhibits are located in museums, they still receive insufficient protection due to improper environmental conditions. These external changes can negatively affect the condition of the exhibits and hamper their preservation over time. Hence, it is imperative to develop an innovative, low-cost system to monitor indoor air quality systematically, since conventional methods are quite expensive and time-consuming. The present study gives an insight into the indoor air quality of the National Byzantine Museum of Cyprus. In particular, systematic measurements of particulate matter, bio-aerosols, the concentrations of targeted chemical pollutants (including volatile organic compounds (VOCs)), temperature, relative humidity, and lighting conditions, as well as microbial counts, were performed using conventional techniques. Measurements showed that most of the monitored physicochemical parameters did not vary significantly across the various sampling locations. Seasonal fluctuations of ammonia were observed, with higher concentrations in summer and lower in winter. It was found that the outdoor environment does not significantly affect indoor air quality in terms of VOCs and nitrogen oxides (NOX). A cutting-edge portable Gas Chromatography-Mass Spectrometry (GC-MS) system (TORION T-9) was used to identify and measure the concentrations of specific volatile and semi-volatile organic compounds. A large number of different VOCs and SVOCs were found, such as benzene, toluene, xylene, ethanol, hexadecane, and acetic acid, as well as some more complex compounds such as 3-ethyl-2,4-dimethyl-isopropyl alcohol, 4,4'-biphenylene-bis-(3-aminobenzoate), and trifluoro-2,2-dimethylpropyl ester. Apart from the permanent indoor/outdoor sources of these organic compounds (i.e., wooden frames, painted exhibits, carpets, the ventilation system, and outdoor air), the concentrations of some of them within areas of the museum were found to increase when large groups of visitors were simultaneously present at a specific place in the museum. High levels of particulate matter (PM), fungi, and bacteria were found in the museum's carpeted areas, but low colony counts were found in rooms where artworks are exhibited. The measurements mentioned above were used to validate an innovative, low-cost air-quality monitoring system developed within the present work. The developed system is able to monitor the average concentrations (on a bi-daily basis) of several pollutants and presents several innovative features, including prompt alerting when the average concentrations of monitored pollutants exceed the limit values defined by the user.
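
A minimal sketch of the alerting logic just described, assuming user-defined limit values; the pollutant names, limits, and readings below are placeholders, not the museum's configuration.

```python
# User-defined limit values for the averaged concentrations (placeholders).
LIMITS = {"TVOC_ppb": 300.0, "PM2.5_ug_m3": 25.0, "NH3_ppb": 100.0}

def check_window(readings: dict[str, list[float]]) -> list[str]:
    """Average each pollutant over the sampling window and flag exceedances."""
    alerts = []
    for pollutant, values in readings.items():
        avg = sum(values) / len(values)
        if avg > LIMITS[pollutant]:
            alerts.append(f"ALERT: {pollutant} mean {avg:.1f} exceeds limit {LIMITS[pollutant]}")
    return alerts

window = {"TVOC_ppb": [280, 350, 390], "PM2.5_ug_m3": [12, 14, 11], "NH3_ppb": [60, 75, 66]}
for msg in check_window(window):
    print(msg)
```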

Keywords: exhibitions, indoor air quality, VOCs, pollution

Procedia PDF Downloads 123
128 Reducing the Computational Cost of a Two-way Coupling CFD-FEA Model via a Multi-scale Approach for Fire Determination

Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Kevin Tinkham, Ella Quigley

Abstract:

Structural integrity is a key performance parameter for cladding products, especially concerning fire performance. Cladding products such as PIR-based sandwich panels are tested rigorously, in line with industrial standards. Physical fire tests are necessary to ensure the customer's safety but can give little information about the critical behaviours that could help develop new materials. Numerical modelling is a tool that can help investigate a fire's behaviour further by replicating the fire test. However, fire is an interdisciplinary problem, as it is a chemical reaction that behaves fluidly and impacts structural integrity. An analysis using Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) is needed to capture all aspects of a fire performance test. One method is a two-way coupling analysis that imports the updated thermal data, due to the fire's behaviour, into the FEA solver in a series of iterations. In our recent work with Tata Steel U.K. using a two-way coupling methodology to determine fire performance, it was shown that a program called FDS-2-Abaqus can predict a BS 476-22 furnace test with a degree of accuracy. The test demonstrated the fire performance of Tata Steel U.K.'s Trisomet product, a polyisocyanurate (PIR) based sandwich panel used for cladding. Previous work demonstrated the limitations of the current version of the program, the main limitation being the computational cost of modelling three Trisomet panels, totalling an area of 9 m². The computational cost increases substantially with the intention to scale up to an LPS 1181-1 test, which includes a total panel surface area of 200 m². The FDS-2-Abaqus program is developed further within this paper to overcome this obstacle and better accommodate Tata Steel U.K.'s PIR sandwich panels. The new developments aim to reduce the computational cost and the error margin compared to experimental data. One avenue explored is a multi-scale approach in the form of Reduced Order Modelling (ROM). The approach allows the user to include refined details of the sandwich panels, such as the overlapping joints, without a computationally costly mesh size. Comparative studies will be made between the new implementations and the previous study completed using the original FDS-2-Abaqus program. Validation of the study will come from physical experiments in line with governing-body standards such as BS 476-22 and LPS 1181-1. The physical experimental data includes the panels' gas and surface temperatures and mechanical deformation. Conclusions are drawn, noting the impact of the new implementations and discussing the feasibility of scaling up further to a whole warehouse.
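
To illustrate the two-way coupling loop described in the abstract, the sketch below alternates a CFD-like thermal step with an FEA-like structural step over a series of iterations; all functions are illustrative stubs, not the FDS-2-Abaqus interface.

```python
# Schematic only: the CFD step produces updated thermal boundary data, the FEA
# step consumes it, and the deformed geometry is fed back for the next iteration.
def run_cfd_step(geometry, t0, t1):
    return {"surface_temp_C": 600.0 + 10.0 * t1}      # stub thermal field

def run_fea_step(geometry, thermal_field):
    sag = thermal_field["surface_temp_C"] / 1000.0    # stub deformation (m)
    return {**geometry, "midspan_sag_m": sag}

geometry = {"panel_span_m": 3.0, "midspan_sag_m": 0.0}
dt, t = 60.0, 0.0
for step in range(5):                                 # five coupling iterations
    thermal = run_cfd_step(geometry, t, t + dt)
    geometry = run_fea_step(geometry, thermal)
    t += dt
    print(f"t={t:.0f}s sag={geometry['midspan_sag_m']:.3f} m")
```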

Keywords: fire testing, numerical coupling, sandwich panels, thermo fluids

Procedia PDF Downloads 79
127 Generating Individualized Wildfire Risk Assessments Utilizing Multispectral Imagery and Geospatial Artificial Intelligence

Authors: Gus Calderon, Richard McCreight, Tammy Schwartz

Abstract:

Forensic analysis of community wildfire destruction in California has shown that reducing or removing flammable vegetation in proximity to buildings and structures is one of the most important wildfire defenses available to homeowners. State laws specify the requirements for homeowners to create and maintain defensible space around all structures. Unfortunately, this decades-long effort has had limited success due to noncompliance and minimal enforcement. As a result, vulnerable communities continue to experience escalating human and economic costs along the wildland-urban interface (WUI). Quantifying vegetative fuels at both the community and parcel scale requires detailed imaging from an aircraft with remote sensing technology to reduce uncertainty. FireWatch has been delivering high-spatial-resolution (5" ground sample distance) wildfire hazard maps annually to the community of Rancho Santa Fe, CA, since 2019. FireWatch uses a multispectral imaging system mounted onboard an aircraft to create georeferenced orthomosaics and spectral vegetation index maps. Using proprietary algorithms, the vegetation type, condition, and proximity to structures are determined for 1,851 properties in the community. Secondary data processing combines object-based classification of vegetative fuels, assisted by machine learning, to prioritize mitigation strategies within the community. The remote sensing data for the 10 sq. mi. community is divided into parcels and sent to all homeowners in the form of defensible space maps and reports. Follow-up aerial surveys are performed annually using repeat-station imaging of fixed GPS locations to track changes in defensible space, vegetation fuel cover, and condition over time. These maps and reports have increased wildfire awareness and mitigation efforts from 40% to over 85% among homeowners in Rancho Santa Fe. To assist homeowners fighting increasing insurance premiums and non-renewals, FireWatch has partnered with Black Swan Analytics, LLC, to leverage the multispectral imagery and increase homeowners' understanding of wildfire risk drivers. For this study, a subsample of 100 parcels was selected to gain a comprehensive understanding of wildfire risk and the elements that can be mitigated. Geospatial data from FireWatch's defensible space maps was combined with Black Swan's patented approach, which uses 39 other risk characteristics, into a 4score Report. The 4score Report helps property owners understand risk sources and potential mitigation opportunities by assessing four categories of risk: fuel sources, ignition sources, susceptibility to loss, and hazards to fire protection efforts (FISH). This study has shown that susceptibility to loss is the category on which residents and property owners must focus their efforts. The 4score Report also provides a tool to measure the impact of homeowner actions on risk levels over time. Resiliency is the only solution to breaking the cycle of community wildfire destruction, and it starts with high-quality data and education.
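
Spectral vegetation index maps of the kind described are typically derived band-wise from the multispectral orthomosaic; the sketch below computes NDVI, one common such index, as an assumed illustration (the bands are synthetic and the threshold is arbitrary, not FireWatch's proprietary algorithm).

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalised Difference Vegetation Index, computed per pixel."""
    return (nir - red) / (nir + red + 1e-9)

nir = np.random.rand(512, 512)           # near-infrared band of the orthomosaic
red = np.random.rand(512, 512)           # red band
vegetation_mask = ndvi(nir, red) > 0.3   # pixels likely to be live vegetation
```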

Keywords: defensible space, geospatial data, multispectral imaging, Rancho Santa Fe, susceptibility to loss, wildfire risk

Procedia PDF Downloads 108
126 Quantitative, Preservative Methodology for Review of Interview Transcripts Using Natural Language Processing

Authors: Rowan P. Martnishn

Abstract:

During the execution of a National Endowment for the Arts grant, approximately 55 interviews were collected from professionals across various fields. These interviews were used to create deliverables: historical connections for creations that began as art and evolved entirely into computing technology. With dozens of hours' worth of transcripts to be analyzed by qualitative coders, a quantitative methodology was created to sift through the documents. The initial step was to clean and format all the data. First, a basic spelling and grammar check was applied, as well as a Python script for normalized formatting, which used an open-source grammatical formatter to make the data as coherent as possible. Ten documents were randomly selected for manual review, during which words frequently mistranscribed were recorded and replaced throughout all other documents. Then, to remove all banter and side comments, the transcripts were spliced into paragraphs (separated by change in speaker) and all paragraphs with fewer than 300 characters were removed. Secondly, a keyword extractor, a form of natural language processing in which significant words in a document are selected, was run on each paragraph of every interview. Every proper noun was put into a data structure corresponding to the respective interview. From there, a Bidirectional and Auto-Regressive Transformer (B.A.R.T.) summary model was applied to each paragraph that included any of the proper nouns selected from the interview. At this stage the information to review had been reduced from about 60 hours' worth of data to 20. The data was further processed through light manual observation: any summaries that fit the criteria of the proposed deliverable were selected, as well as their locations within the document. This narrowed the data down to about 5 hours' worth of processing. The qualitative researchers were then able to find 8 more connections in addition to our previous 4, exceeding our minimum quota of 3 to satisfy the grant. Major findings of the study and subsequent curation of this methodology raised a conceptual finding crucial to working with qualitative data of this magnitude. In the use of artificial intelligence, there is a general trade-off in a model between breadth of knowledge and specificity. If the model has too much knowledge, the user risks leaving out important data (too general). If the tool is too specific, it has not seen enough data to be useful. Thus, this methodology proposes a solution to this trade-off. The data is never altered outside of grammatical and spelling checks. Instead, the important information is marked, creating an indicator of where the significant data is without compromising its purity. Secondly, the data is chunked into smaller paragraphs, giving specificity, and then cross-referenced with the keywords (allowing generalization over the whole document). This way, no data is harmed, and qualitative experts can go over the raw data instead of using highly manipulated results. Given the success in deliverable creation as well as the circumvention of this trade-off, this methodology should stand as a model for synthesizing qualitative data while maintaining its original form.
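
A condensed sketch of the filtering and summarization steps described above, assuming Hugging Face's BART summarization pipeline and a crude capitalization heuristic standing in for the unnamed keyword extractor; model choice and thresholds are illustrative.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def proper_nouns(paragraph: str) -> set[str]:
    """Crude stand-in for a keyword extractor: mid-sentence capitalised tokens."""
    words = paragraph.split()
    return {w.strip(".,") for i, w in enumerate(words) if i > 0 and w[:1].isupper()}

def summarise_relevant(paragraphs: list[str]) -> list[str]:
    kept = [p for p in paragraphs if len(p) >= 300]   # drop banter / side comments
    if not kept:
        return []
    nouns = set().union(*(proper_nouns(p) for p in kept))  # per-interview noun store
    hits = [p for p in kept if proper_nouns(p) & nouns]
    return [summarizer(p, max_length=60, min_length=15)[0]["summary_text"] for p in hits]
```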

Keywords: B.A.R.T. model, keyword extractor, natural language processing, qualitative coding

Procedia PDF Downloads 28
125 Automated Evaluation Approach for Time-Dependent Question Answering Pairs on Web Crawler Based Question Answering System

Authors: Shraddha Chaudhary, Raksha Agarwal, Niladri Chatterjee

Abstract:

This work demonstrates a web crawler-based, generalized, end-to-end open-domain Question Answering (QA) system. An efficient QA system requires a significant amount of domain knowledge to answer any question, with the aim of finding an exact and correct answer, in the form of a number, a noun, a short phrase, or a brief piece of text, to the user's questions. Analysis of the question, searching the relevant documents, and choosing an answer are three important steps in a QA system. This work uses a web scraper (Beautiful Soup) to extract K documents from the web. The value of K can be calibrated on the basis of a trade-off between time and accuracy. This is followed by a passage ranking process, using a model trained on the 500K queries of the MS MARCO dataset, to extract the most relevant text passage and shorten the lengthy documents. Further, a QA system is used to extract the answers from the shortened documents based on the query and return the top 3 answers. For the evaluation of such systems, accuracy is judged by the exact match between predicted answers and gold answers. But automatic evaluation methods fail due to the linguistic ambiguities inherent in the questions. Moreover, reference answers are often not exhaustive or are out of date; hence, correct answers predicted by the system are often judged incorrect according to the automated metrics. One such scenario arises from the original Google Natural Questions (GNQ) dataset, which was collected and made available in 2016. Any such dataset proves inefficient with respect to questions that have time-varying answers. For illustration, consider the query "Where will the next Olympics be held?" The gold answer given in the GNQ dataset is "Tokyo". Since the dataset was collected in 2016, and the next Olympics after 2016 were the 2020 Games in Tokyo, that answer was absolutely correct at the time. But if the same question is asked in 2022, then the answer is "Paris, 2024". Consequently, any evaluation based on the GNQ dataset will be incorrect for such questions. Such erroneous predictions are usually given to human evaluators for further validation, which is quite expensive and time-consuming. To address this erroneous evaluation, the present work proposes an automated approach for evaluating time-dependent question-answer pairs. In particular, it proposes a metric using the current timestamp along with the top-n predicted answers from a given QA system. To test the proposed approach, the GNQ dataset was used, and the system achieved an accuracy of 78% on a test dataset comprising 100 QA pairs. This test data was automatically extracted using an analysis-based approach from 10K QA pairs of the GNQ dataset. The results obtained are encouraging. The proposed technique appears to have the possibility of developing into a useful scheme for gathering precise, reliable, and specific information in a real-time and efficient manner. Our subsequent experiments will be guided towards establishing the efficacy of the above system for a larger set of time-dependent QA pairs.
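
A minimal sketch of the proposed time-aware metric: an answer counts as correct if any of the top-n predictions matches the gold answer valid at the query timestamp. The timeline data structure is an assumption for illustration, not the paper's exact format.

```python
from datetime import date

# Hypothetical (valid_from, answer) timeline for "Where will the next Olympics be held?"
GOLD_TIMELINE = [(date(2016, 1, 1), "tokyo"), (date(2021, 8, 9), "paris")]

def gold_at(timestamp: date) -> str:
    """Return the gold answer that is valid at the given timestamp."""
    valid = [answer for valid_from, answer in GOLD_TIMELINE if valid_from <= timestamp]
    return valid[-1]

def correct(top_n_answers: list[str], timestamp: date) -> bool:
    gold = gold_at(timestamp)
    return any(gold in answer.lower() for answer in top_n_answers)

print(correct(["Paris, 2024", "Los Angeles"], date(2022, 6, 1)))  # True
print(correct(["Tokyo"], date(2022, 6, 1)))                       # False: out of date
```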

Keywords: web-based information retrieval, open domain question answering system, time-varying QA, QA evaluation

Procedia PDF Downloads 101
124 Interpersonal Competence Related to the Practice Learning of Occupational Therapy Students in Hong Kong

Authors: Lik Hang Gary Wong

Abstract:

Background: Practice learning is crucial for preparing healthcare professionals to meet real challenges upon graduation. Students are required to demonstrate their competence in managing interpersonal challenges, such as working in teams with other professionals and communicating well with service users, during the placement. Such competence precedes clinical practice, and it may eventually affect students' actual performance in a clinical context. Unfortunately, there are limited studies investigating how such competence affects students' performance in practice learning. Objectives: The aim of this study is to investigate how self-rated interpersonal competence affects students' actual performance during clinical placement. Methods: 40 occupational therapy students from Hong Kong were recruited for this study. Prior to the clinical placement (level two or above), they completed an online survey that included the Interpersonal Communication Competence Scale (ICCS), measuring self-perceived competence in interpersonal communication. Near the end of their placement, the clinical educator rated students' performance with the Student Practice Evaluation Form - Revised edition (SPEF-R), which measures the eight core competency domains required of an entry-level occupational therapist. This study adopted a cross-sectional observational design. Pearson correlation and multiple regression analyses were conducted to examine the relationship between students' interpersonal communication competence and their actual performance in clinical placement. Results: The ICCS total scores were significantly correlated with all the SPEF-R domains, with correlation coefficients r ranging from 0.39 to 0.51. The strongest association was found with the co-worker communication domain (r = 0.51, p < 0.01), followed by the information gathering domain (r = 0.50, p < 0.01). With the ICCS total score as the independent variable and the ratings in the various SPEF-R domains as the dependent variables in the multiple regression analyses, the interpersonal competence measure was identified as a significant predictor of co-worker communication (R² = 0.33, β = 0.014, SE = 0.006, p = 0.026), information gathering (R² = 0.27, β = 0.018, SE = 0.007, p = 0.011), and service provision (R² = 0.17, β = 0.017, SE = 0.007, p = 0.020). Moreover, some specific communication skills appeared to be especially important to clinical practice. For example, immediacy, which refers to whether the students were readily approachable on all social occasions, correlated with all the SPEF-R domains, with r-values ranging from 0.33 to 0.45. Other sub-skills, such as empathy, interaction management, and supportiveness, were also found to be significantly correlated with most of the SPEF-R domains. Meanwhile, the ICCS scores correlated differently with the co-worker communication domain (r = 0.51, p < 0.01) and the communication with the service user domain (r = 0.39, p < 0.05), suggesting that different communication skill sets are required for different interpersonal contexts within the workplace. Conclusion: Students' self-perceived interpersonal communication competence predicted their actual performance during clinical placement. Moreover, some specific communication skills were more important to co-worker communication than to daily interaction with service users. There are implications for how to better prepare students to meet future challenges upon graduation.
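
The correlation and regression analyses reported above can be reproduced with standard statistical calls; the sketch below uses SciPy with invented example scores (the data values are placeholders, not the study's measurements).

```python
from scipy.stats import pearsonr, linregress

iccs_total = [88, 92, 75, 81, 95, 70, 84, 90]            # self-rated competence
spefr_coworker = [3.8, 4.1, 3.2, 3.5, 4.4, 3.0, 3.7, 4.0]  # educator rating

r, p = pearsonr(iccs_total, spefr_coworker)              # correlation
fit = linregress(iccs_total, spefr_coworker)             # simple regression
print(f"r={r:.2f} (p={p:.3f}); slope={fit.slope:.3f}, R^2={fit.rvalue**2:.2f}")
```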

Keywords: interpersonal competence, clinical education, healthcare professional education, occupational therapy, occupational therapy students

Procedia PDF Downloads 71
123 Fueling Efficient Reporting And Decision-Making In Public Health With Large Data Automation In Remote Areas, Neno Malawi

Authors: Wiseman Emmanuel Nkhomah, Chiyembekezo Kachimanga, Julia Huggins, Fabien Munyaneza

Abstract:

Background: Partners In Health – Malawi introduced an operational research project called the Primary Health Care (PHC) Surveys in 2020, which seeks to assess progress in the delivery of care in the district. The study consists of 5 long surveys, namely Facility Assessment, General Patient, Provider, Sick Child, and Antenatal Care (ANC), primarily conducted in 4 health facilities in Neno district: Neno district hospital, Dambe health centre, Chifunga, and Matope. Usually, these annual surveys are conducted from January, and the target is to present the final report by June. Once data is collected and analyzed, a series of reviews takes place before the final report is reached. Initially, the manual process took over 9 months to present the final report, and initial findings showed that only about 76.9% of the data added up when cross-checked with paper-based sources. Purpose: The aim of this approach is to move away from manually pulling the data, redoing the analysis, and reporting, a process associated not only with delays and reporting inconsistencies but also with poor data quality if not done carefully. This automation approach was meant to utilize features of new technologies to create visualizations, reports, and dashboards in Power BI that are fed directly from the data source, CommCare, and hence require only a single click of a 'refresh' button to populate updated information in visualizations, reports, and dashboards at once. Methodology: We transformed paper-based questionnaires into electronic ones using the CommCare mobile application. We then connected CommCare directly to Power BI using an Application Programming Interface (API) connection as the data pipeline, which made it possible to create visualizations, reports, and dashboards in Power BI. In contrast to the process of manually collecting data in paper-based questionnaires, entering it in ordinary spreadsheets, and conducting the analysis anew every time a report is prepared, the team utilized CommCare and Microsoft Power BI technologies. We utilized validations and logic in CommCare to capture data with fewer errors, and we utilized Power BI features to host the reports online by publishing them to the cloud. We switched from sharing ordinary report files to sharing links with recipients, giving them the freedom to dig deeper into extra findings within the Power BI dashboards and to export to any format of their choice. Results: This data automation approach reduced research timelines from the initial 9 months to 5. It also improved the quality of the data findings from the original 76.9% to 98.9%. This brought confidence in drawing conclusions from the findings that help in decision-making and created opportunities for further research. Conclusion: These results suggest that automating the research data process has the potential to reduce the overall amount of time spent and to improve the quality of the data. On this basis, the concept of data automation should be taken into serious consideration when conducting operational research, for the sake of efficiency and decision-making.
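
As a hedged illustration of the API pipeline step, the sketch below pulls submitted form records from CommCare over HTTP into a DataFrame of the kind a Power BI refresh could consume; the URL pattern, credentials, and field names are placeholders, not the project's actual configuration.

```python
import requests
import pandas as pd

BASE = "https://www.commcarehq.org/a/YOUR_DOMAIN/api/v0.5/form/"   # placeholder domain
HEADERS = {"Authorization": "ApiKey user@example.org:YOUR_API_KEY"}  # placeholder key

def fetch_forms(limit: int = 100) -> pd.DataFrame:
    """Fetch a page of submitted survey forms and keep a few metadata fields."""
    resp = requests.get(f"{BASE}?limit={limit}", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    rows = [{"form_id": form.get("id"), "received_on": form.get("received_on")}
            for form in resp.json().get("objects", [])]
    return pd.DataFrame(rows)

df = fetch_forms()
print(df.head())
```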

Keywords: reporting, decision-making, Power BI, CommCare, data automation, visualizations, dashboards

Procedia PDF Downloads 116
122 Geographic Information System Based Multi-Criteria Subsea Pipeline Route Optimisation

Authors: James Brown, Stella Kortekaas, Ian Finnie, George Zhang, Christine Devine, Neil Healy

Abstract:

The use of GIS as an analysis tool for engineering decision making is now best practice in the offshore industry. GIS enables multidisciplinary data integration, analysis, and visualisation, which allows the presentation of large and intricate datasets in a simple map interface accessible to all project stakeholders. Presenting integrated geoscience and geotechnical data in GIS enables decision makers to be well informed. This paper is a successful case study of how GIS spatial analysis techniques were applied to help select the most favourable pipeline route. Routing a pipeline through any natural environment faces numerous obstacles, whether topographical, geological, engineering, or financial. Where the pipeline is subjected to external hydrostatic water pressure and is carrying pressurised hydrocarbons, the requirement to safely route the pipeline through hazardous terrain becomes absolutely paramount. This study illustrates how the application of modern, GIS-based pipeline routing techniques enabled the identification of a single most favourable pipeline route crossing a challenging seabed terrain. Conventional approaches to pipeline route determination focus on manual avoidance of primary constraints whilst endeavouring to minimise route length. Such an approach is qualitative and subjective, and is liable to bias towards the discipline and expertise involved in the routing process. For very short routes traversing benign seabed topography in shallow water this approach may be sufficient, but for deepwater geohazardous sites an automated, multi-criteria, quantitative approach is essential. This study combined multiple routing constraints using modern least-cost-routing algorithms deployed in GIS, hitherto unachievable with conventional approaches. The least-cost-routing procedure begins with the assignment of geocost across the study area. Geocost is defined as a numerical penalty score representing the hazard posed to the pipeline by each routing constraint (e.g., slope angle, rugosity, vulnerability to debris flows). All geocosted routing constraints are combined to generate a composite geocost map that is used to compute the least-geocost route between two defined terminals. The analyses were applied to select the most favourable pipeline route for a potential gas development in deep water. The study area is geologically complex, with a series of incised, potentially active canyons carved into a steep escarpment and evidence of extensive debris flows. A similar debris flow in the future could cause significant damage to a poorly placed pipeline. Protruding inter-canyon spurs offer lower-gradient options for ascending an escarpment, but the vulnerability of these spurs to periodic failure is not well understood. Close collaboration between geoscientists, pipeline engineers, geotechnical engineers, and of course the gas export pipeline operator guided the analyses and the assignment of geocosts. Shorter route length, less severe slope angles, and geohazard avoidance were the primary drivers in identifying the most favourable route.
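
A minimal sketch of least-cost routing over a composite geocost grid; the constraint layers and weights are invented for illustration, while scikit-image's route_through_array supplies the minimum-cumulative-cost path computation.

```python
import numpy as np
from skimage.graph import route_through_array

# Synthetic, normalised penalty layers standing in for the geocosted constraints.
slope = np.random.rand(200, 300)      # slope-angle penalty
rugosity = np.random.rand(200, 300)   # rugosity penalty
debris = np.random.rand(200, 300)     # debris-flow vulnerability penalty

# Weighted combination into a composite geocost map (weights are assumptions).
geocost = 0.5 * slope + 0.3 * rugosity + 0.2 * debris
start, end = (10, 5), (190, 290)      # the two defined terminal cells

path, total_cost = route_through_array(geocost, start, end, fully_connected=True)
print(f"{len(path)} cells on the least-geocost route, cumulative geocost {total_cost:.1f}")
```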

Keywords: geocost, geohazard, pipeline route determination, pipeline route optimisation, spatial analysis

Procedia PDF Downloads 406
121 Improving the Utility of Social Media in Pharmacovigilance: A Mixed Methods Study

Authors: Amber Dhoot, Tarush Gupta, Andrea Gurr, William Jenkins, Sandro Pietrunti, Alexis Tang

Abstract:

Background: The COVID-19 pandemic has driven pharmacovigilance towards a new paradigm. Nowadays, more people than ever before are recognising and reporting adverse reactions to medications, treatments, and vaccines. In the modern era, with over 3.8 billion users, social media has become the most accessible medium for people to voice their opinions, and so it provides an opportunity to engage with more patient-centric and accessible pharmacovigilance. However, the pharmaceutical industry has been slow to incorporate social media into its modern pharmacovigilance strategy. This project aims to make social media a more effective tool in pharmacovigilance, and so reduce drug costs, improve drug safety, and improve patient outcomes. This will be achieved by first uncovering and categorising the barriers to the widespread adoption of social media in pharmacovigilance. Following this, the potential opportunities of social media will be explored. We then propose realistic, practical recommendations to make social media a more effective tool for pharmacovigilance. Methodology: A comprehensive systematic literature review was conducted to produce a categorised summary of these barriers. This was followed by 11 semi-structured interviews with pharmacovigilance experts to confirm the literature review findings while also exploring the unpublished, real-life challenges faced by those in the pharmaceutical industry. Finally, a survey of the general public (n = 112) ascertained public knowledge, perception, and opinion regarding the use of their social media data for pharmacovigilance purposes. This project stands out by offering perspectives from the public and the pharmaceutical industry that fill the research gaps identified in the literature review. Results: Our results gave rise to several key analysis points. Firstly, inadequacies of current natural language processing algorithms hinder effective pharmacovigilance data extraction from social media, and where data extraction is possible, there are significant questions over its quality. Social media also contains a variety of biases towards common drugs, mild adverse drug reactions, and the younger generation. Additionally, outdated regulations for social media pharmacovigilance do not align with the modern General Data Protection Regulation (GDPR), creating ethical ambiguity about data privacy and levels of access. This leads to an underlying mindset of avoidance within the pharmaceutical industry, as firms are disincentivised by the legal, financial, and reputational risks associated with breaking ambiguous regulations. Conclusion: Our project uncovered several barriers that prevent effective pharmacovigilance on social media. As such, social media should be used to complement traditional sources of pharmacovigilance data rather than serve as a sole source. However, this project adds further value by proposing five practical recommendations to improve the effectiveness of social media pharmacovigilance: prioritising health-orientated social media; improving technical capabilities through investment and strategic partnerships; setting clear regulatory guidelines through multi-stakeholder processes; creating an adverse drug reaction reporting interface built into social media platforms; and, finally, developing educational campaigns to raise awareness of the use of social media in pharmacovigilance. Implementation of these recommendations would speed up the efficient, ethical, and systematic adoption of social media in pharmacovigilance.

Keywords: adverse drug reaction, drug safety, pharmacovigilance, social media

Procedia PDF Downloads 81
120 Exploring the Dose-Response Association of Lifestyle Behaviors and Mental Health among High School Students in the US: A Secondary Analysis of 2021 Adolescent Behaviors and Experiences Survey Data

Authors: Layla Haidar, Shari Esquenazi-Karonika

Abstract:

Introduction: Mental health includes one's emotional, psychological, and interpersonal well-being; it ranges from "good" to "poor" on a continuum. At the individual level, it affects how a person thinks, feels, and acts. Moreover, it determines how they cope with stress, relate to others, and interface with their surroundings. Research has shown that mental health is directly related to short- and long-term physical health (including chronic disease), health risk behaviors, education level, employment, and social relationships. As is the case with physical conditions like diabetes, heart disease, and cancer, mitigating the behavioral and genetic risks of debilitating mental health conditions like anxiety and depression can nurture healthier mental health throughout one's life. In order to maximize the benefits of prevention, it is important to identify modifiable risks and develop protective habits earlier in life. Methods: The Adolescent Behaviors and Experiences Survey (ABES) dataset was used for this study. The ABES survey was administered to high school students (9th-12th grade) during January-June 2021 by the Centers for Disease Control and Prevention (CDC). The data were analyzed to identify associations between feelings of sadness, hopelessness, or increased suicidality among high school students and their participation on one or more sports teams and their average daily screen time. Data were analyzed using descriptive and multivariable analytic techniques. A multinomial logistic regression was conducted for each variable to examine whether there was an association, controlling for grade level, sex, and race. Results: The findings from this study are insightful for administrators and policymakers who wish to address mounting concerns related to student mental health. The study revealed that, compared to students who participated on no sports team, students who participated on one or more sports teams showed a significantly increased risk of depression (p<0.05). Conversely, the rate of depression was significantly lower in students who consumed five or more hours of screen time per day than in those who consumed less than one hour per day (p<0.05). Conclusion: These findings highlight the importance of understanding the nuances of student participation on sports teams (e.g., physical exertion, the social dynamics of the team, and the level of competitiveness within the sport). Likewise, the context of an individual's screen time (e.g., social media, engaging in team-based video games, or watching television) can inform parental or school-based policies about screen time activity. Although physical activity has been proven to be important for the emotional and physical well-being of youth, playing on multiple teams could have negative consequences on the emotional state of high school students, potentially due to fatigue, overtraining, and injuries. Existing literature has highlighted the negative effects of screen time; however, further research needs to consider the type of screen-based consumption to better understand its effects on mental health.
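
A minimal sketch of the described regression is given below, using statsmodels with hypothetical column and file names for a cleaned ABES extract; a full analysis would also account for the survey's weights and complex sampling design, which are omitted here.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical cleaned extract of ABES (file and column names illustrative).
df = pd.read_csv("abes_extract.csv")

# Outcome: categorical sadness/hopelessness indicator, encoded as codes.
y = df["sad_hopeless"].astype("category").cat.codes

# Predictors: sports-team count and screen-time category, controlling for
# grade level, sex, and race (dummy-coded, reference level dropped).
X = pd.get_dummies(
    df[["sports_teams", "screen_time", "grade", "sex", "race"]],
    drop_first=True,
).astype(float)
X = sm.add_constant(X)

fit = sm.MNLogit(y, X).fit()
print(fit.summary())
```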

Keywords: behavioral science, mental health, adolescents, prevention

Procedia PDF Downloads 105
119 Optimization of Structures with Mixed Integer Non-linear Programming (MINLP)

Authors: Stojan Kravanja, Andrej Ivanič, Tomaž Žula

Abstract:

This contribution focuses on structural optimization in civil engineering using mixed-integer non-linear programming (MINLP). MINLP is a versatile method that can handle both continuous and discrete optimization variables simultaneously. Continuous variables are used to optimize parameters such as dimensions, stresses, masses, or costs, while discrete variables represent binary decisions determining the presence or absence of structural elements within a structure, as well as the choice of discrete materials and standard sections. The optimization process is divided into three main steps. First, a mechanical superstructure is generated, comprising a variety of different topology, material, and dimensional alternatives. Next, a MINLP model is formulated to encapsulate the optimization problem. Finally, an optimal solution is sought in the direction of the defined objective function while respecting the structural constraints. The economic or mass objective function of the material and labor costs of a structure is subject to the constraints known from structural analysis. These constraints include equations for the calculation of internal forces and deflections, as well as equations for the dimensioning of structural components (in accordance with the Eurocode standards). Given the complex, non-convex, and highly non-linear nature of optimization problems in civil engineering, the Modified Outer-Approximation/Equality-Relaxation (OA/ER) algorithm is applied. This algorithm alternately solves subproblems of non-linear programming (NLP) and main problems of mixed-integer linear programming (MILP), in this way gradually refining the solution space up to the optimal solution. The NLP corresponds to the continuous optimization of parameters (with fixed topology, discrete materials, and standard dimensions, all determined in the previous MILP), while the MILP involves a global approximation to the superstructure of alternatives, in which a new topology, materials, and standard dimensions are determined. The optimization of a convex problem is stopped when the MILP solution becomes better than the best NLP solution; otherwise, it is terminated when the NLP solution can no longer be improved. While the OA/ER algorithm, like all other algorithms, does not guarantee global optimality in the presence of non-convex functions, various modifications, including convexity tests, are implemented in OA/ER to mitigate these difficulties. The effectiveness of the proposed MINLP approach is demonstrated by its application to various structural optimization tasks, such as the mass optimization of steel buildings, the cost optimization of timber halls, composite floor systems, etc. Special optimization models have been developed for the optimization of these structures. The MINLP optimizations, facilitated by the user-friendly software package MIPSYN, provide insights into mass- or cost-optimal solutions, optimal structural topologies, and optimal material and standard cross-section choices, confirming MINLP as a valuable method for the optimization of structures in civil engineering.
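
The flavour of such a formulation can be seen in the toy model below, solved with Pyomo's MindtPy solver, whose outer-approximation strategy alternates NLP subproblems and MILP master problems in the manner of the OA/ER scheme; the variables, coefficients, and solvers are illustrative and unrelated to MIPSYN.

```python
from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           Binary, SolverFactory, minimize)

# Toy convex MINLP: a continuous sizing variable x and a binary decision y
# on whether an extra structural element is present (illustrative numbers).
m = ConcreteModel()
m.x = Var(bounds=(0.0, 10.0))  # e.g. cross-section size
m.y = Var(within=Binary)       # element present or absent

# Nonlinear material cost plus a fixed cost if the element is built.
m.cost = Objective(expr=2.0 * m.x**2 + 40.0 * m.y, sense=minimize)

# Demand must be covered by sizing, by the extra element, or by both.
m.demand = Constraint(expr=m.x + 4.0 * m.y >= 5.0)

# Requires MILP (e.g. glpk) and NLP (e.g. ipopt) solvers to be installed.
SolverFactory("mindtpy").solve(m, strategy="OA",
                               mip_solver="glpk", nlp_solver="ipopt")
print(f"x = {m.x.value:.2f}, y = {int(m.y.value)}, cost = {m.cost():.2f}")
```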

Keywords: MINLP, mixed-integer non-linear programming, optimization, structures

Procedia PDF Downloads 46
118 Membrane Permeability of Middle Molecules: A Computational Chemistry Approach

Authors: Sundaram Arulmozhiraja, Kanade Shimizu, Yuta Yamamoto, Satoshi Ichikawa, Maenaka Katsumi, Hiroaki Tokiwa

Abstract:

Drug discovery is shifting from small-molecule drugs targeting a local active site to middle molecules (MMs) targeting large, flat, and groove-shaped binding sites, for example, protein-protein interfaces, because at least half of all targets assumed to be involved in human disease have been classified as "difficult to drug" with traditional small molecules. Hence, MMs such as peptides, natural products, glycans, and nucleic acids with highly potent bioactivities have become important targets for drug discovery programs in recent years, as they could be used for "undruggable" intracellular targets. Cell membrane permeability is one of the key properties of pharmacodynamically active MM drug compounds, so evaluating this property for potential MMs is crucial. Computational prediction of the cell membrane permeability of molecules is very challenging; however, recent advances in molecular dynamics simulations help to solve this issue partially. It is expected that MMs with high membrane permeability will enable drug discovery research to expand its borders towards intracellular targets. Further, to understand the chemistry behind the permeability of MMs, it is necessary to investigate their conformational changes during permeation through the membrane, and for that, their interactions with the membrane field should be studied reliably, because these interactions involve various non-bonding interactions such as hydrogen bonding, π-stacking, charge transfer, polarization, dispersion, and non-classical weak hydrogen bonding. Therefore, parameter-based classical mechanics calculations are hardly sufficient to investigate these interactions; rather, quantum mechanical (QM) calculations are essential. The fragment molecular orbital (FMO) method can be used for this purpose, as it performs ab initio QM calculations by dividing the system into fragments. The present work aims to study the cell permeability of middle molecules using molecular dynamics simulations and FMO-QM calculations. For this purpose, the natural compound syringolin and its analogues were considered. Molecular simulations were performed using the NAMD and Gromacs programs with the CHARMM force field. FMO calculations were performed using the PAICS program at the correlated Resolution-of-Identity second-order Moller-Plesset (RI-MP2) level with the cc-pVDZ basis set. The simulations clearly show that while syringolin could not permeate the membrane, its selected analogues pass through on the nanosecond scale. This correlates well with existing experimental evidence that these syringolin analogues are membrane-permeable compounds. Further analyses indicate that intramolecular π-stacking interactions in the syringolin analogues influenced their permeability positively. These intramolecular interactions reduce the polarity of the analogues so that they can permeate the lipophilic cell membrane. In conclusion, the cell membrane permeability of various middle molecules with potent bioactivities was efficiently studied using molecular dynamics simulations, and insight into this behavior was thoroughly investigated using FMO-QM calculations. The results indicate that non-bonding intramolecular interactions such as hydrogen bonding and π-stacking, along with the conformational flexibility of MMs, are essential for favorable membrane permeation. These results are a nice example of how this computational approach could be used to study the permeability of other middle molecules. This work was supported by the Japan Agency for Medical Research and Development (AMED) under Grant Number 18ae0101047.
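
For context, a common way to turn MD output into a permeability estimate, not necessarily the authors' exact workflow, is the inhomogeneous solubility-diffusion model, in which the resistance to permeation is integrated across the membrane; the sketch below uses an illustrative free-energy barrier and a constant diffusivity.

```python
import numpy as np
from scipy.integrate import trapezoid

# Inhomogeneous solubility-diffusion model (illustrative inputs only):
#   1/P = integral dz of exp(G(z)/kT) / D(z)
kT = 0.593  # kcal/mol at ~298 K

z = np.linspace(-20.0, 20.0, 401)        # depth across membrane, angstrom
G = 4.0 * np.exp(-((z / 10.0) ** 2))     # free-energy profile, kcal/mol
D = np.full_like(z, 1.0e-5)              # local diffusivity, cm^2/s

resistance = trapezoid(np.exp(G / kT) / D, z * 1.0e-8)  # angstrom -> cm
print(f"estimated permeability P = {1.0 / resistance:.3e} cm/s")
```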

Keywords: fragment molecular orbital theory, membrane permeability, middle molecules, molecular dynamics simulation

Procedia PDF Downloads 188
117 Examining the Drivers of Engagement in Social Media Brand Communities

Authors: Rania S. Hussein

Abstract:

This research mainly focuses on examining engagement in social media brand communities. Engagement in social media has become a main focus in the literature, affirming that the role of social media in our daily lives is growing (Akman and Mishra, 2017; Prado-Gascó et al., 2017). Social media has also become a key medium for brand communication and brand relationship building (Frimpong and McLean, 2018; Dimitriu and Guesalaga, 2017). Engagement on social media has become a main focus of many researchers who have tried to understand this concept further and draw a link between engagement and various social media activities (Cvijikj and Michahelles, 2013; Andre, 2015; Wang et al., 2015). According to Felix et al. (2017), the internet and social media have provided better digital resources to improve brand loyalty and customer interactions, thus leading to social media engagement within brand communities. The aim of this research is to highlight the importance of social media and why it is important to maintain engagement within it. While the term 'engagement' is widely used in scholarly literature, there is no common consensus about what the term exactly entails (Kidd, 2011). On one hand, it has been seen as something that includes factors such as participation, activation, empowerment, devotion, trust, and productivity (Zhang and Benyoucef, 2016). Other scholars hold different viewpoints. For example, Lim et al. (2015) break engagement down into three types: operational engagement, emotional engagement, and relational engagement. Chandler and Lusch (2015) further studied engagement as a means to measure commitment to a brand. Fernandes and Remelhe (2016) took a more technical view, measuring engagement through comments, following, subscribing, sharing, enjoying, writing, etc., in the social media context. Customer engagement has become a research focus for understanding how consumer relationships are developed, retained, and improved within a digital context. Based on previous literature, it is evident that many customer engagement studies are limited to the interaction between firms and consumers on social media. There is a clear gap in the literature regarding consumer-to-consumer interaction and user-generated content and its significance. While some researchers, such as Alversia et al. (2016), touched upon the importance of customer-based engagement, a gap remains: there is no consistent, well-tested method for defining the factors that affect consumer interaction. Moreover, few scholarly papers (e.g., Case, 2019; Riley, 2020; Habibi, 2014) help businesses understand their customers' interaction habits and the best ways to develop customer loyalty. Additionally, the majority of research on brand pages has concentrated on the drivers of consumer engagement, with just a few studies (e.g., Lamberton, 2016; Poorrezaei, 2016; Jayasingh, 2019) looking into its implications. This study focuses on understanding the concept of engagement and its importance, specifically engagement within social media brand communities. It examines drivers as well as consequences of engagement, including brand knowledge, brand trust, entertainment, and brand page interactivity. Brand engagement is also expected to affect brand loyalty and word of mouth.

Keywords: engagement, social media, brand communities, drivers

Procedia PDF Downloads 160
116 IoT Continuous Monitoring Biochemical Oxygen Demand Wastewater Effluent Quality: Machine Learning Algorithms

Authors: Sergio Celaschi, Henrique Canavarro de Alencar, Claaudecir Biazoli

Abstract:

Effluent quality is of the highest priority for compliance with the permit limits of environmental protection agencies and ensures the protection of the local water system. Of the pollutants monitored, biochemical oxygen demand (BOD) poses one of the greatest challenges. This work presents a solution that improves a wastewater treatment plant's (WWTP) ability to react to different situations and meet treatment goals: delayed BOD5 results from the lab take 7 to 8 days of analysis, hindering the WWTP's ability to react in time. Our goal is to reduce BOD turnaround time from days to hours. The solution is based on a system of two BOD bioreactors associated with Digital Twin (DT) and Machine Learning (ML) methodologies via an Internet of Things (IoT) platform to monitor and control a WWTP and support decision-making. A DT is a virtual and dynamic replica of a production process. It requires the ability to collect and store real-time sensor data related to the operating environment; furthermore, it integrates and organizes the data on a digital platform and applies analytical models, allowing a deeper understanding of the real process in order to catch anomalies sooner. In our system for continuous-time monitoring of the BOD removed by the effluent treatment process, the DT algorithm for analyzing the data applies ML to a parameterized chemical kinetic model. The continuous BOD monitoring system, capable of providing results in a fraction of the time required by BOD5 analysis, is composed of two thermally isolated batch bioreactors. Each bioreactor contains input/output access to the wastewater sample (influent and effluent), hydraulic conduction tubes, pumps and valves for the batch sample and dilution water, an air supply for dissolved oxygen (DO) saturation, a cooler/heater for sample thermal stability, an optical DO sensor based on fluorescence quenching, pH, ORP, temperature, and atmospheric pressure sensors, and a local PLC/CPU with a TCP/IP data transmission interface. The dynamic BOD monitoring range covers 2 mg/L < BOD < 2,000 mg/L. In addition to the BOD monitoring system, there are many other operational WWTP sensors. The CPU data are transmitted to and received from the digital platform, which in turn performs analyses at periodic intervals to feed the learning process. BOD bulletins and their credibility intervals are made available to web users at 12-hour intervals. The chemical kinetics ML algorithm is composed of a coupled system of four first-order ordinary differential equations for the molar masses of DO, the organic material present in the sample, biomass, and the reaction products (CO₂ and H₂O). This system is solved numerically with its initial conditions: DO saturated, and the initial products of the kinetic oxidation process CO₂ = H₂O = 0. The initial values for organic matter and biomass are estimated by minimizing the mean square deviations. A real case of continuous monitoring of BOD wastewater effluent quality is being conducted by deploying an IoT application on a large wastewater purification system located in S. Paulo, Brazil.
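
The structure of such a kinetic model can be sketched as follows; the rate law and constants are illustrative placeholders, not the calibrated model, and the initial organic matter and biomass would in practice be fitted to the sensor data by least squares.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative coupled kinetics: dissolved oxygen S_O, organic matter S_M,
# biomass X, and reaction products P (CO2 + H2O). Constants are placeholders.
k, Y, k_O = 1.0e-3, 0.6, 0.4  # rate (L/mg/day), yield, O2 stoichiometry

def rhs(t, y):
    S_O, S_M, X, P = y
    r = k * S_M * X              # oxidation rate of organic matter
    return [-k_O * r,            # oxygen consumed (exerted BOD)
            -r,                  # organic matter degraded
            Y * r,               # biomass growth
            (1.0 - Y) * r]       # products accumulate

# Saturated DO, fitted organic matter/biomass, no products at t = 0.
y0 = [8.0, 150.0, 5.0, 0.0]      # mg/L, illustrative
sol = solve_ivp(rhs, (0.0, 5.0), y0, dense_output=True)

t = np.linspace(0.0, 5.0, 6)
exerted_bod = y0[0] - sol.sol(t)[0]   # oxygen consumed after t days
print(np.round(exerted_bod, 3))
```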

Keywords: effluent treatment, biochemical oxygen demand, continuous monitoring, IoT, machine learning

Procedia PDF Downloads 73
115 Linguistic Analysis of Borderline Personality Disorder: Using Language to Predict Maladaptive Thoughts and Behaviours

Authors: Charlotte Entwistle, Ryan Boyd

Abstract:

Recent developments in information retrieval techniques and natural language processing have allowed for greater exploration of psychological and social processes. Linguistic analysis methods for understanding behaviour have provided useful insights within the field of mental health. One area within mental health that has received little attention, though, is borderline personality disorder (BPD). BPD is a common mental health disorder characterised by instability of interpersonal relationships, self-image, and affect. It also manifests through maladaptive behaviours, such as impulsivity and self-harm. Examination of language patterns associated with BPD could allow for a greater understanding of the disorder and its links to maladaptive thoughts and behaviours. Language analysis methods could also be used predictively, for example by identifying indicators of BPD or predicting maladaptive thoughts, emotions, and behaviours. Additionally, associations uncovered between language and maladaptive thoughts and behaviours could then be applied at a more general level. This study explores linguistic characteristics of BPD, and their links to maladaptive thoughts and behaviours, through the analysis of social media data. Data were collected from a large corpus of posts from the publicly available social media platform Reddit, namely from the 'r/BPD' subreddit, in which people identify as having BPD. Data were collected using the Python Reddit API Wrapper and included all users who had posted within the BPD subreddit. All posts were manually inspected to ensure that they were not posted by someone who clearly did not have BPD, such as people posting about a loved one with BPD. These users were then tracked across all other subreddits in which they had posted, and data from those subreddits were also collected. Additionally, data were collected from a random control group of Reddit users. Disorder-relevant behaviours, such as self-harming or aggression-related behaviours, described within Reddit posts were coded by expert raters. All posts and comments were aggregated by user and split by subreddit. Language data were then analysed using the Linguistic Inquiry and Word Count (LIWC) 2015 software. LIWC is a text analysis program that identifies and categorises words based on linguistic and paralinguistic dimensions, psychological constructs, and personal concern categories. Statistical analyses of linguistic features were then conducted. Findings revealed distinct linguistic features associated with BPD, based on Reddit posts, which differentiated these users from the control group. Language patterns were also found to be associated with the occurrence of maladaptive thoughts and behaviours. Thus, this study demonstrates that there are indeed linguistic markers of BPD present on social media. It also implies that language could be predictive of maladaptive thoughts and behaviours associated with BPD. These findings are important as they suggest the potential for clinical interventions based on the language of people with BPD to reduce the likelihood of maladaptive thoughts and behaviours occurring, for example through social media tracking or by engaging people with BPD in expressive writing therapy. Overall, this study provides a greater understanding of the disorder and how it manifests through language and behaviour.
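
The collection step described above can be sketched with PRAW as below; the credentials are placeholders, and the real study additionally filtered users manually and sampled a control group.

```python
import praw

# Placeholder credentials; a registered Reddit API app is assumed.
reddit = praw.Reddit(client_id="YOUR_ID", client_secret="YOUR_SECRET",
                     user_agent="bpd-language-study/0.1")

# Collect authors who posted in r/BPD.
authors = {s.author.name for s in reddit.subreddit("BPD").new(limit=1000)
           if s.author is not None}

# Track each user's posting history across their other subreddits.
for name in sorted(authors)[:5]:          # a few users, for illustration
    for s in reddit.redditor(name).submissions.new(limit=100):
        print(name, s.subreddit.display_name, s.title[:40])
```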

Keywords: behaviour analysis, borderline personality disorder, natural language processing, social media data

Procedia PDF Downloads 349
114 The Asymptotic Hole Shape in Long Pulse Laser Drilling: The Influence of Multiple Reflections

Authors: Torsten Hermanns, You Wang, Stefan Janssen, Markus Niessen, Christoph Schoeler, Ulrich Thombansen, Wolfgang Schulz

Abstract:

In long pulse laser drilling of metals, it can be demonstrated that the ablation shape approaches a so-called asymptotic shape, which changes only slightly, or not at all, with further irradiation. These findings are already known from ultrashort pulse (USP) ablation of dielectric and semiconducting materials. The explanation for the occurrence of an asymptotic shape in long pulse drilling of metals is identified, and a model describing the asymptotic hole shape is implemented numerically, tested, and clearly confirmed by comparison with experimental data. The model assumes a robust process, meaning that the characteristics of the melt flow inside the arising melt film do not change qualitatively when the laser or processing parameters are changed. Only robust processes are technically controllable and thus of industrial interest. The condition for a robust process is identified by a threshold for the mass flow density of the assist gas at the hole entrance, which has to be exceeded. Within a robust process regime, the melt flow characteristics can be captured by only one model parameter, namely the intensity threshold. In analogy to USP ablation (where it has long been known that the resulting hole shape results from a threshold for the absorbed laser fluence), it is demonstrated that in the case of robust long pulse ablation the asymptotic shape forms such that the absorbed heat flux density equals the intensity threshold along the whole contour. The intensity threshold depends on the specific material and radiation properties and has to be calibrated by one reference experiment. The model is implemented in a numerical simulation called AsymptoticDrill, which requires so few resources that it can run on common desktop PCs, laptops, or even smart devices. Resulting hole shapes can be calculated within seconds, which is a clear advantage over other simulations presented in the literature in the context of everyday industrial usage. Against this background, the software is additionally equipped with a user-friendly GUI which allows intuitive usage: individual parameters can be adjusted using sliders while the simulation result appears immediately in an adjacent window. Platform-independent development allows flexible usage: the operator can adjust the process very conveniently on a tablet, while the developer can run the tool in the office to design new processes. Furthermore, to the best knowledge of the authors, AsymptoticDrill is the first simulation which allows the import of measured real beam distributions and thus calculates the asymptotic hole shape on the basis of the real state of the specific manufacturing system. In this paper, the emphasis is placed on investigating the effect of multiple reflections on the asymptotic hole shape, which gains in importance when drilling holes with large aspect ratios.
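
To make the asymptotic condition concrete, the sketch below integrates the resulting profile for a Gaussian beam under a single-reflection assumption (our simplification; multiple reflections, the subject of this paper, are ignored): a wall element with local slope z'(r) absorbs I(r)/sqrt(1 + z'(r)^2), and equating this to the intensity threshold I_th gives z'(r) = sqrt((I(r)/I_th)^2 - 1).

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Illustrative units: Gaussian beam I(r) = I0 * exp(-2 r^2 / w^2).
I0, I_th, w = 10.0, 1.0, 0.5

r_rim = w * np.sqrt(0.5 * np.log(I0 / I_th))  # where I(r) = I_th: hole rim
r = np.linspace(0.0, r_rim, 500)
I = I0 * np.exp(-2.0 * r**2 / w**2)
slope = np.sqrt((I / I_th) ** 2 - 1.0)        # |dz/dr| along the wall

# Depth below the surface: z(r) = integral from r to r_rim of slope dr'.
z = cumulative_trapezoid(slope, r, initial=0.0)
depth = z[-1] - z
print(f"asymptotic depth on axis: {depth[0]:.2f} (same length unit as w)")
```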

Keywords: asymptotic hole shape, intensity threshold, long pulse laser drilling, robust process

Procedia PDF Downloads 213
113 Developing Computational Thinking in Early Childhood Education

Authors: Kalliopi Kanaki, Michael Kalogiannakis

Abstract:

Nowadays, in the digital era, the early acquisition of basic programming skills and knowledge is encouraged, as it facilitates students' exposure to computational thinking and empowers their creativity, problem-solving skills, and cognitive development. More and more researchers and educators investigate the introduction of computational thinking in K-12, since it is expected to be a fundamental skill for everyone by the middle of the 21st century, just like reading, writing, and arithmetic are at the moment. In this paper, doctoral research in progress is presented, which investigates the infusion of computational thinking into the science curriculum in early childhood education. The whole attempt aims to develop young children's computational thinking by introducing them to the fundamental concepts of object-oriented programming in an enjoyable, yet educational, framework. The backbone of the research is the digital environment PhysGramming (an abbreviation of Physical Science Programming), which provides children the opportunity to create their own digital games, turning them from passive consumers into active creators of technology. PhysGramming deploys an innovative hybrid schema of visual and text-based programming techniques, with emphasis on object orientation. Through PhysGramming, young students are familiarized with basic object-oriented programming concepts, such as classes, objects, and attributes, while at the same time getting a view of object-oriented programming syntax. Nevertheless, the most noteworthy feature of PhysGramming is that children create their own digital games within the context of physical science courses, in a way that provides familiarization with the basic principles of object-oriented programming and computational thinking, even though no specific reference is made to these principles. Attuned to the ethical guidelines of educational research, interventions were conducted in two second-grade classes. The interventions were designed with respect to the thematic units of the physical science curriculum, as part of the learning activities of the class. PhysGramming was integrated into the classroom after short introductory sessions. During the interventions, 6-7 year old children worked in pairs on computers and created their own digital games (group games, matching games, and puzzles). The authors participated in these interventions as observers in order to achieve a realistic evaluation of the proposed educational framework concerning its applicability in the classroom and its educational and pedagogical perspectives. To better examine whether the objectives of the research were met, the investigation focused on six criteria: the educational value of PhysGramming, its engaging and enjoyable characteristics, its child-friendliness, its appropriateness for the proposed purpose, its ability to monitor the user's progress, and its individualizing features. In this paper, the functionality of PhysGramming and the philosophy of its integration in the classroom are both described in detail. Information about the implemented interventions and the results obtained is also provided. Finally, several limitations of the research that deserve attention are noted.
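
For readers unfamiliar with the terminology, the snippet below illustrates the class-object-attribute idea in plain Python; PhysGramming itself uses its own hybrid visual/text-based syntax, so this is only a conceptual analogue of what the children construct.

```python
# A class is a blueprint; objects are its instances; attributes hold state.
class GameCard:
    def __init__(self, name, picture):
        self.name = name        # attribute
        self.picture = picture  # attribute

    def matches(self, other):
        """Two cards match when they show the same picture."""
        return self.picture == other.picture

# Each object gets its own attribute values, as in a matching game.
card1 = GameCard("card 1", "magnet")
card2 = GameCard("card 2", "magnet")
print(card1.matches(card2))  # True
```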

Keywords: computational thinking, early childhood education, object-oriented programming, physical science courses

Procedia PDF Downloads 120
112 India’s Energy Transition, Pathways for Green Economy

Authors: B. Sudhakara Reddy

Abstract:

In a modern economy, energy is fundamental to virtually every product and service in use, and economies have developed in dependence on abundant, easy-to-transform, polluting fossil fuels. On one hand, increases in population and income levels, combined with increased per capita energy consumption, require energy production to keep pace with economic growth; on the other, the impact of fossil fuel use on environmental degradation is enormous. The conflicting policy objectives of protecting the environment while increasing economic growth and employment have resulted in this paradox. Hence, it is important to decouple economic growth from environmental degradation, and the search for green energy involving affordable, low-carbon, and renewable energies has become a global priority. This paper explores a transition to a sustainable energy system using the socio-economic-technical scenario method. This approach takes into account the multifaceted nature of transitions, which require not only the development and use of new technologies but also changes in user behaviour, policy, and regulation. The scenarios developed are: baseline business as usual (BAU) and green energy (GE). The baseline scenario assumes that current trends (energy use, efficiency levels, etc.) will continue in the future. India's population is projected to grow by 23% during 2010-2030, reaching 1.47 billion. The real GDP, as per the model, is projected to grow by 6.5% per year on average between 2010 and 2030, reaching US$5.1 trillion, or $3,586 per capita (base year 2010). Due to the increases in population and GDP, primary energy demand will double in two decades, reaching 1,397 MTOE in 2030, with the share of fossil fuels remaining around 80%. The increase in energy use corresponds to an increase in energy intensity (TOE/US$ of GDP) from 0.019 to 0.036. Carbon emissions are projected to increase by 2.5 times from 2010, reaching 3,440 million tonnes, with per capita emissions of 2.2 tons per annum. However, the carbon intensity (tons per US$ of GDP) decreases from 0.96 to 0.67. As per the GE scenario, energy use will reach 1,079 MTOE by 2030, a saving of about 30% over BAU. The penetration of renewable energy resources will reduce total primary energy demand by 23% under GE. The reduction in fossil fuel demand and the focus on clean energy will reduce the energy intensity to 0.21 (TOE/US$ of GDP) and the carbon intensity to 0.42 (tons/US$ of GDP) under the GE scenario. The study develops new 'pathways out of poverty' by creating more than 10 million jobs, thus raising the standard of living of low-income people. Our scenarios are, to a great extent, based on existing technologies. The challenges to this path lie in the socio-economic-political domains. However, to attain a green economy, an appropriate policy package should be in place, which will be critical in determining the kinds of investments needed and the incidence of costs and benefits. These results provide a basis for policy discussions on investments, policies, and incentives to be put in place by national and local governments.
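
The arithmetic core of such scenarios is compact enough to sketch: project GDP with a growth rate, apply an assumed energy-intensity trajectory, and read off demand. The parameters below are illustrative back-of-envelope values, not the study's calibrated model.

```python
def project_energy(energy0, gdp0, gdp_growth, intensity_change, years):
    """Project energy demand as (intensity path) x (GDP path)."""
    gdp = gdp0 * (1.0 + gdp_growth) ** years
    intensity = (energy0 / gdp0) * (1.0 + intensity_change) ** years
    return gdp, intensity * gdp

# Illustrative BAU-style run: 6.5%/yr GDP growth over 20 years with a
# gently declining energy intensity (all numbers are placeholders).
gdp, energy = project_energy(energy0=700.0, gdp0=1.7e12,
                             gdp_growth=0.065, intensity_change=-0.028,
                             years=20)
print(f"GDP ~ US${gdp / 1e12:.1f} trillion, demand ~ {energy:.0f} MTOE")
```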

Keywords: energy, renewables, green technology, scenario

Procedia PDF Downloads 248
111 Culture and Health Equity: Unpacking the Sociocultural Determinants of Eye Health for Indigenous Australian Diabetics

Authors: Aryati Yashadhana, Ted Fields Jnr., Wendy Fernando, Kelvin Brown, Godfrey Blitner, Francis Hayes, Ruby Stanley, Brian Donnelly, Bridgette Jerrard, Anthea Burnett, Anthony B. Zwi

Abstract:

Indigenous Australians experience some of the worst health outcomes globally, with life expectancy significantly poorer than that of non-Indigenous Australians. This is largely attributed to preventable diseases such as diabetes (prevalence 39% in Indigenous Australian adults > 55 years), which in turn raises the risk of diabetic visual impairment and cataract among Indigenous adults. Our study aims to explore the interface between structural and sociocultural determinants and human agency, in order to understand how they impact (1) the accessibility of eye health and chronic disease services and (2) the potential for Indigenous patients to achieve positive clinical eye health outcomes. We used Participatory Action Research methods and aimed to privilege the voices of Indigenous people through community collaboration. Semi-structured interviews (n=82) and patient focus groups (n=8) were conducted by Indigenous Community-Based Researchers (CBRs) with diabetic Indigenous adults (> 40 years) in four remote communities in Australia. Interviews (n=25) and focus groups (n=4) with primary health care clinicians in each community were also conducted. Data were audio recorded, transcribed verbatim, and analysed thematically using grounded theory, comparative analysis, and NVivo 10. Preliminary analysis occurred in tandem with data collection to determine theoretical saturation. The principal investigator (AY) led analysis sessions with CBRs, fostering cultural and contextual appropriateness in interpreting responses, knowledge exchange, and capacity building. Identified themes were conceptualised into three spheres of influence: structural (health services, government), sociocultural (Indigenous cultural values, distrust of the health system, ongoing effects of colonialism and dispossession), and individual (health beliefs/perceptions, patient phenomenology). Permeating these spheres of influence were three core determinants: economic disadvantage, health literacy/education, and cultural marginalisation. These core determinants affected the accessibility of services and the potential for patients to achieve positive clinical outcomes at every level of care (primary, secondary, tertiary). Our findings highlight the clinical realities of institutionalised and structural inequities, illustrated through the lived experiences of Indigenous patients and primary care clinicians in the four sampled communities. The complex determinants surrounding inequity in health for Indigenous Australians are entrenched through a longstanding experience of cultural discrimination and ostracism. Secure and long-term funding of Aboriginal Community Controlled Health Services will be valuable but is insufficient to address issues of inequity. Rather, working collaboratively with communities to build trust and to identify needs and solutions at the grassroots level, while leveraging community voices to drive change at the systemic/policy level, is recommended.

Keywords: indigenous, Australia, culture, public health, eye health, diabetes, social determinants of health, sociology, anthropology, health equity, Aboriginal and Torres Strait Islander, primary care

Procedia PDF Downloads 300
110 Automated System: Managing the Production and Distribution of Radiopharmaceuticals

Authors: Shayma Mohammed, Adel Trabelsi

Abstract:

Radiopharmacy is the art of preparing high-quality, radioactive, medicinal products for use in diagnosis and therapy. Unlike normal medicines, radiopharmaceuticals have a dual aspect (radioactive and medicinal) that makes their management highly critical. One of the most convincing applications of modern technologies is the ability to delegate the execution of repetitive tasks to programming scripts. Automation has found its way into even the most skilled jobs, improving a company's overall performance by allowing human workers to focus on tasks more important than document filling. This project aims to contribute to implementing a comprehensive system to ensure rigorous management of radiopharmaceuticals through a platform that links the Nuclear Medicine Service Management System to the Nuclear Radiopharmacy Management System, in accordance with the recommendations of the World Health Organization (WHO) and the International Atomic Energy Agency (IAEA). In this project, we attempt to build a web application that targets radiopharmacies; the platform is built atop the inherently compatible web stack, which allows it to work in virtually any environment. Different technologies are used in this project (PHP, Symfony, MySQL Workbench, Bootstrap, Angular 7, Visual Studio Code, and TypeScript). The operating principle of the platform is based on two main parts: a Radiopharmaceutical Back Office for the radiopharmacist, who is responsible for the realization of radiopharmaceutical preparations and their delivery, and a Medical Back Office for the doctor, who holds the authorization for the possession and use of radionuclides and is responsible for ordering radioactive products. The application consists of seven modules: Production, Quality Control/Quality Assurance, Release, General Management, References, Transport, and Stock Management. It allows eight classes of users: the Production Manager (PM), Quality Control Manager (QCM), Stock Manager (SM), General Manager (GM), Client (Doctor), Parking and Transport Manager (PTM), Qualified Person (QP), and Technical and Production Staff. The digital platform brings together all players involved in the use of radiopharmaceuticals and integrates the stages of preparation, production, and distribution. Web technologies, in particular, promise to offer all the benefits of automation while requiring no more than a web browser to act as a user client, which is a strength because the web stack is by nature multi-platform. The platform will provide a traceability system for radiopharmaceutical products to ensure the safety and radioprotection of staff and patients. The new integrated platform is an alternative to writing all the boilerplate paperwork manually, a tedious and error-prone task. It would minimize manual human manipulation, which has proven to be the main source of error in nuclear medicine. A codified electronic transfer of information from radiopharmaceutical preparation to delivery will further reduce the risk of maladministration.

Keywords: automated system, management, radiopharmacy, technical papers

Procedia PDF Downloads 156
109 Evaluation of the Role of Advocacy and the Quality of Care in Reducing Health Inequalities for People with Autism, Intellectual and Developmental Disabilities at Sheffield Teaching Hospitals

Authors: Jonathan Sahu, Jill Aylott

Abstract:

Individuals with Autism, Intellectual and Developmental Disabilities (AIDD) are one of the most vulnerable groups in society, hampered not only by their own limitations in understanding and interacting with the wider society, but also by societal limitations in perception and understanding. Communication to express their needs and wishes is fundamental to enabling such individuals to live and prosper in society. This research project was designed as an organisational case study, in a large secondary health care hospital within the National Health Service (NHS), to assess the quality of care provided to people with AIDD and to review the role of advocacy in reducing health inequalities for these individuals. Methods: The research was conducted from an "insider researcher" position. Data collection included both quantitative and qualitative data, i.e., a mixed-methods approach. A semi-structured interview schedule was designed and used to obtain qualitative and quantitative primary data from a wide range of interdisciplinary frontline health care workers, to assess their understanding and awareness of the systems, processes, and evidence-based practice needed to offer a quality service to people with AIDD. Secondary data were obtained from sources within the organisation, in keeping with "case study" as a primary method, and organisational performance data were then compared against national benchmarking standards. Further data sources were accessed to help evaluate the effectiveness of the different types of advocacy present in the organisation, gauged by measures of user and carer experience in the form of retrospective survey analysis, incidents, and complaints. Results: Secondary data demonstrate near-compliance of the organisation with the current national benchmarking standard (Monitor Compliance Framework). However, primary data demonstrate poor knowledge of the Mental Capacity Act 2005 and poor knowledge of the organisational systems, processes, and evidence-based practice applied to people with AIDD. In addition, frontline health care workers showed poor knowledge and awareness of advocacy and advocacy schemes for this group. Conclusions: A significant amount of work needs to be undertaken to improve the quality of care delivered to individuals with AIDD. An operational strategy promoting the widespread dissemination of information may not be the best approach to delivering quality care, optimal patient experience, and patient advocacy. In addition, a more robust set of standards, with appropriate metrics, needs to be developed to assess organisational performance in a way that will stand the test of professional and public scrutiny.

Keywords: advocacy, autism, health inequalities, intellectual developmental disabilities, quality of care

Procedia PDF Downloads 217