Search results for: online processing service
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9465

135 Numerical Investigations of Unstable Pressure Fluctuations Behavior in a Side Channel Pump

Authors: Desmond Appiah, Fan Zhang, Shouqi Yuan, Wei Xueyuan, Stephen N. Asomani

Abstract:

The side channel pump has distinctive hydraulic performance characteristics compared to other vane pumps because it generates high pressure heads in only one impeller revolution. Hence, its utilization and application are soaring in petrochemical, food processing, automotive, and aerospace fuel pumping, where high heads are required at low flows. The side channel pump is characterized by unstable flow: after fluid flows into the impeller passage, it moves into the side channel, comes back to the impeller again, and then moves on to the next circulation. Consequently, the flow leaves the side channel pump following a helical path. However, the pressure fluctuation exhibited in the flow greatly contributes to the unwanted noise and vibration associated with the flow. In this paper, a side channel pump prototype was examined thoroughly through numerical calculations based on the SST k-ω turbulence model to ascertain the pressure fluctuation behavior. The pressure fluctuation intensity of the 3D unstable flow dynamics was carefully investigated under three working conditions: 0.8QBEP, 1.0QBEP, and 1.2QBEP. The results showed that the pressure fluctuation distribution around the pressure side of the blade is greater than that on the suction side at the impeller and side channel interface (z=0) for all three operating conditions. The part-load condition 0.8QBEP recorded the highest pressure fluctuation distribution because of the high circulation velocity, which causes an intense exchanged flow between the impeller and side channel. Time- and frequency-domain spectra of the pressure fluctuation patterns in the impeller and the side channel were also analyzed at the best efficiency point, QBEP, using the solution from the numerical calculations. It was observed from the time-domain analysis that the pressure fluctuation in the impeller flow passage increased steadily until the flow reached the interrupter, which separates the low-pressure inflow from the high-pressure outflow. The pressure fluctuation amplitudes in the frequency-domain spectrum at the different monitoring points depicted a gently decreasing trend of the pressure amplitudes, which was common among the operating conditions. The frequency domain also revealed that the main excitation frequencies occurred at 600 Hz, 1200 Hz, and 1800 Hz, continuing at integer multiples of the rotating shaft frequency. Also, the mass flow exchange plots indicated that the side channel pump is characterized by many vortex flows. Operating conditions 0.8QBEP and 1.0QBEP depicted fewer, similar vortex flows, while 1.2QBEP recorded many vortex flows around the inflow, middle, and outflow regions. The results of the numerical calculations were finally verified experimentally. The performance characteristic curves from the simulated results showed that the 0.8QBEP working condition recorded a head increase of 43.03% and an efficiency decrease of 6.73% compared to 1.0QBEP. It can be concluded that for industrial applications where high heads are mostly required, the side channel pump can be designed to operate at part-load conditions. This paper can serve as a source of information for optimizing reliable performance and widening the applications of side channel pumps.
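
A minimal post-processing sketch (Python/NumPy) of the frequency-domain analysis described above: take the FFT of a monitor-point pressure trace and report the dominant excitation frequencies. The sampling rate and the synthetic signal are assumptions for illustration, not the study's CFD output.

```python
import numpy as np

# Synthetic stand-in for a monitor-point pressure trace: blade-passing
# harmonics at 600, 1200 and 1800 Hz plus broadband noise.
fs = 20_000                       # assumed sampling rate [Hz]
t = np.arange(0, 0.5, 1 / fs)
p = (1.0 * np.sin(2 * np.pi * 600 * t)
     + 0.5 * np.sin(2 * np.pi * 1200 * t)
     + 0.25 * np.sin(2 * np.pi * 1800 * t)
     + 0.1 * np.random.default_rng(0).standard_normal(t.size))

# One-sided amplitude spectrum.
spec = np.abs(np.fft.rfft(p)) * 2 / p.size
freqs = np.fft.rfftfreq(p.size, 1 / fs)

# Report the three strongest spectral peaks.
peaks = np.sort(freqs[np.argsort(spec)[-3:]])
print(f"dominant excitation frequencies: {peaks} Hz")
```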

Keywords: exchanged flow, pressure fluctuation, numerical simulation, side channel pump

Procedia PDF Downloads 136
134 Portable Environmental Parameter Monitor Based on STM32

Authors: Liang Zhao, Chongquan Zhong

Abstract:

Introduction: According to statistics, people spend 80% to 90% of their time indoors, so indoor air quality, whether at home or in the office, greatly impacts quality of life, health, and work efficiency. Therefore, indoor air quality is very important to human activities. With the acceleration of urbanization, people are spending more time indoors; time spent in indoor environments, the size of living spaces, and the frequency of interior decoration are all increasing. However, housing decoration materials contain formaldehyde and other harmful substances, causing environmental and air quality problems, which have brought serious harm to countless families and attracted growing attention. According to World Health Organization statistics, the indoor environments in more than 30% of buildings in China are polluted by poisonous and harmful gases. Indoor pollution has caused various health problems, and these widespread public health problems can lead to respiratory diseases. Long-term inhalation of low-concentration formaldehyde causes persistent headache, insomnia, weakness, palpitations, weight loss, and vomiting, seriously impacting human health and safety. On the other hand, as for offices, some surveys show that good indoor air quality helps motivate staff and improves work efficiency by 2%-16%. Therefore, people need to further understand their living and working environments. There is a need for easy-to-use indoor environment monitoring instruments, with which users only have to power up the device to monitor the environmental parameters; the corresponding real-time data can be displayed on the screen for analysis. The monitor should have a sensitive alarm function and raise an alarm when harmful gases such as formaldehyde, CO, or SO2 exceed levels safe for the human body. System design: According to the monitoring requirements for various gases, temperature, and humidity, we designed a portable, light, real-time, and accurate monitor for various environmental parameters, including temperature, humidity, formaldehyde, methane, and CO. This monitor generates an alarm signal when a parameter exceeds its standard limit. It can conveniently measure a variety of harmful gases and provide the alarm function. It also has the advantages of small volume and convenience to carry and use. It has a real-time display function, outputting the parameters on the LCD screen, and a real-time alarm function. Conclusions: This study is focused on the research and development of a portable parameter monitoring instrument for indoor environments. On the platform of an STM32 development board, the monitored data are collected through external sensors. The STM32 platform handles the data acquisition and processing procedures and successfully monitors the real-time temperature, humidity, formaldehyde, CO, methane, and other environmental parameters. Real-time data are displayed on the LCD screen. The system is stable and can be used in different indoor places such as homes, hospitals, and offices. Meanwhile, the system adopts the idea of modular design and is easy to port: with slight modification, the scheme can serve the same function in similar monitoring systems. This monitor has very high research and application value.
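
The alarm logic described above reduces to a sample-compare-alert loop. The following minimal sketch expresses that loop in Python for clarity; the actual firmware would run in C on the STM32, and the threshold values are assumed placeholders, not the instrument's calibrated limits.

```python
# Assumed limits for illustration, not the paper's calibrated values.
THRESHOLDS = {
    "formaldehyde_mgm3": 0.10,
    "co_ppm": 9.0,
    "methane_pct_lel": 10.0,
}

def check_sample(sample: dict) -> list:
    """Return the names of parameters that exceed their limits."""
    return [k for k, limit in THRESHOLDS.items() if sample.get(k, 0.0) > limit]

reading = {"formaldehyde_mgm3": 0.14, "co_ppm": 3.2, "methane_pct_lel": 1.0}
alarms = check_sample(reading)
if alarms:
    print("ALARM:", ", ".join(alarms))   # on hardware: buzzer + LCD warning
```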

Keywords: indoor air quality, gas concentration detection, embedded system, sensor

Procedia PDF Downloads 255
133 Multi-scale Geographic Object-Based Image Analysis (GEOBIA) Approach to Segment Very High Resolution Images for the Extraction of New Degraded Zones: Application to the Region of Mécheria in the South-West of Algeria

Authors: Bensaid A., Mostephaoui T., Nedjai R.

Abstract:

A considerable area of Algerian land is threatened by the phenomenon of wind erosion. For a long time, wind erosion and its associated harmful effects on the natural environment have posed a serious threat, especially in the arid regions of the country. In recent years, as a result of the increasingly irrational exploitation of natural resources (fodder) and extensive land clearing, wind erosion has become particularly accentuated. The extent of degradation in the arid region of the Algerian Mécheria department has generated a new situation characterized by the reduction of vegetation cover, the decrease of land productivity, as well as sand encroachment on urban development zones. In this study, we investigate the potential of remote sensing and geographic information systems for detecting the spatial dynamics of the ancient dune cordons based on the numerical processing of PlanetScope PSB.SB sensor images from September 29, 2021. As a second step, we explore the use of a multi-scale geographic object-based image analysis (GEOBIA) approach to segment the high spatial resolution images acquired over heterogeneous surfaces that vary according to human influence on the environment. We used the fractal net evolution approach (FNEA) algorithm to segment the images (Baatz & Schäpe, 2000). Multispectral data, a digital terrain model layer, ground truth data, a normalized difference vegetation index (NDVI) layer, and a first-order texture (entropy) layer were used to segment the multispectral images at three segmentation scales, with an emphasis on accurately delineating the boundaries and components of the sand accumulation areas (dunes, dune fields, nebkhas, and barchans). It is important to note that each auxiliary data layer contributed to improving the segmentation at different scales. The silted areas were classified using a nearest neighbor approach over the Naâma area. The classification of silted areas was successfully achieved over all study areas with an accuracy greater than 85%, although the results suggest that, overall, a higher degree of landscape heterogeneity may have a negative effect on segmentation and classification. Some areas suffered from the greatest over-segmentation and the lowest mapping accuracy (Kappa: 0.79), which was partially attributed to confounding a greater proportion of mixed siltation classes from both sandy areas and bare ground patches. This research has demonstrated a technique based on very high-resolution images for mapping silted and degraded areas using GEOBIA, which can be applied to the study of other lands in the steppe areas of the northern countries of the African continent.
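
As an illustration of one auxiliary layer, the NDVI used in the segmentation is a simple per-pixel band ratio, (NIR - Red) / (NIR + Red). A short sketch with NumPy follows; the band values are invented and do not reflect PlanetScope radiometry.

```python
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized difference vegetation index, computed per pixel."""
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    # Avoid division by zero over water/shadow pixels.
    denom = np.where((nir + red) == 0, 1e-9, nir + red)
    return (nir - red) / denom

# Tiny synthetic 2x2 tile: sparse cover gives low NDVI, dense cover high.
red = np.array([[1200, 300], [900, 250]])
nir = np.array([[1400, 2600], [1000, 2400]])
print(ndvi(red, nir))   # roughly 0.08 and 0.79 on the first row
```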

Keywords: land development, GIS, sand dunes, segmentation, remote sensing

Procedia PDF Downloads 109
132 In-situ Mental Health Simulation with Airline Pilot Observation of Human Factors

Authors: Mumtaz Mooncey, Alexander Jolly, Megan Fisher, Kerry Robinson, Robert Lloyd, Dave Fielding

Abstract:

Introduction: The integration of the WingFactors in-situ simulation programme has transformed the education landscape at the Whittington Health NHS Trust. To date, there have been a total of 90 simulations, 19 of which were aimed at Paediatric trainees, including 2 Child and Adolescent Mental Health (CAMHS) scenarios. The opportunity for joint debriefs provided by clinical faculty and airline pilots has created an exciting new avenue to explore human factors within psychiatry. Through the use of real clinical environments and primed actors, the benefits of high-fidelity simulation and interdisciplinary, interprofessional learning have been highlighted. The use of in-situ simulation within psychiatry is a newly emerging concept, and its success here has been recognised by unanimously positive feedback from participants and acknowledgement through nomination for the Health Service Journal (HSJ) Award (Best Education Programme 2021). Methodology: The first CAMHS simulation featured a collapsed patient in the toilet with a ligature tied around her neck, accompanied by a distressed parent. This required participants to consider the emergency physical management of the case, alongside helping to contain the mother and maintaining situational awareness when transferring the patient to an appropriate clinical area. The second simulation was based on a 17-year-old girl attempting to leave the ward after presenting with an overdose, posing a potential risk to herself. The safe learning environment enabled participants to explore techniques to engage the young person, understand her concerns, and consider the involvement of other members of the multidisciplinary team. The scenarios were followed by an immediate ‘hot’ debrief, combining technical feedback with human factors feedback from uniformed airline pilots and clinicians. The importance of psychological safety was paramount, encouraging open and honest contributions from all participants. Key learning points were summarised into written documents and circulated. Findings: The in-situ simulations demonstrated the need for practical changes both in the Emergency Department and on the Paediatric ward. The presence of airline pilots provided a novel way to debrief on human factors. The following key themes were identified: - Team briefing (‘Golden 5 minutes’): taking a few moments to establish experience, initial roles, and strategies amongst the team can reduce the need for conversations in front of a distressed patient or anxious relative. - Use of checklists/guidelines: principles associated with checklist usage (control of pace, rigour, team situational awareness), instead of reliance on accurate memory recall when under pressure. - Read-back: immediate repetition of safety-critical instructions (e.g. drug/dosage) to mitigate the risks associated with miscommunication. - Distraction management: balancing the risk of losing a team member to manage a distressed relative versus the impact on the care of the young person. - Task allocation: the value of implementing ‘The 5A’s’ (Availability, Address, Allocate, Ask, Advise) for effective task allocation. Conclusion: 100% of participants have requested more simulation training. The involvement of airline pilots has led to a shift in hospital culture, bringing to the forefront the value of human factors focused training and multidisciplinary simulation. This has been of significant value not only in physical health but also in mental health simulation.

Keywords: human factors, in-situ simulation, inter-professional, multidisciplinary

Procedia PDF Downloads 107
131 Visco-Hyperelastic Finite Element Analysis for Diagnosis of Knee Joint Injury Caused by Meniscal Tearing

Authors: Eiji Nakamachi, Tsuyoshi Eguchi, Sayo Yamamoto, Yusuke Morita, H. Sakamoto

Abstract:

In this study, we aim to reveal the relationship between meniscal tearing and articular cartilage injury of the knee joint by using the dynamic explicit finite element (FE) method. Meniscal injuries reduce the meniscus's functional ability and consequently increase the load on the articular cartilage of the knee joint. In order to prevent the induction of osteoarthritis (OA) caused by meniscal injuries, many medical treatment techniques, such as artificial meniscus replacement and meniscal regeneration, have been developed. However, it is reported that these treatments are not comprehensive solutions. In order to reveal the fundamental mechanism of OA induction, the mechanical characterization of the meniscus in normal and injured states is carried out by using FE analyses. First, an FE model of the human knee joint in the normal ('intact') state was constructed by using magnetic resonance (MR) tomography images and the image construction code Materialise Mimics. Next, two meniscal injury models with radial tears of the medial and lateral menisci were constructed. In the FE analyses, a linear elastic constitutive law was adopted for the femur and tibia bones, a visco-hyperelastic constitutive law for the articular cartilage, and a visco-anisotropic hyperelastic constitutive law for the meniscus, respectively. Material properties of the articular cartilage and meniscus were identified using the stress-strain curves obtained from our compressive and tensile tests. The numerical results under the normal walking condition revealed how and where the maximum compressive stress occurred on the articular cartilage. The maximum compressive stress and its occurrence point varied among the intact and the two meniscal tear models. These compressive stress values can be used to establish the threshold value that causes pathological change, for use in diagnosis. In this study, FE analyses of the knee joint were carried out to reveal the influence of meniscal injuries on cartilage injury. The following conclusions are obtained. 1. A 3D FE model, which consists of the femur, tibia, articular cartilage, and meniscus, was constructed from MR images of the human knee joint using tetrahedral FE elements; the image processing code Materialise Mimics was used. 2. A visco-anisotropic hyperelastic constitutive equation was formulated by adopting the generalized Kelvin model. The material properties of the meniscus and articular cartilage were determined by curve fitting with experimental results. 3. Stresses on the articular cartilage and menisci were obtained for the intact case and the two radial tears of the medial and lateral menisci. Compared with the intact knee joint, the two tear models show almost the same stress values as each other, and higher values than the intact one. It was shown that both meniscal tears induce stress localization in both the medial and lateral regions. It is confirmed that our newly developed FE analysis code has the potential to become a new diagnostic system to evaluate meniscal damage to the articular cartilage through mechanical functional assessment.
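
The material identification step in conclusion 2 can be sketched as a least-squares fit of a generalized Kelvin (Prony series) relaxation modulus to relaxation test data. The two-term form, units, and seed values below are assumptions for illustration, and the 'test record' is synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def relaxation_modulus(t, E_inf, E1, tau1, E2, tau2):
    """Two-term generalized Kelvin (Prony series) relaxation modulus."""
    return E_inf + E1 * np.exp(-t / tau1) + E2 * np.exp(-t / tau2)

# Synthetic stand-in for a stress-relaxation test record [MPa].
t = np.linspace(0.01, 100, 200)
data = relaxation_modulus(t, 5.0, 3.0, 0.5, 2.0, 20.0)
data += 0.05 * np.random.default_rng(0).standard_normal(t.size)

p0 = [4.0, 2.0, 1.0, 1.0, 10.0]               # initial parameter guess
popt, _ = curve_fit(relaxation_modulus, t, data, p0=p0)
print("fitted (E_inf, E1, tau1, E2, tau2):", np.round(popt, 2))
```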

Keywords: finite element analysis, hyperelastic constitutive law, knee joint injury, meniscal tear, stress concentration

Procedia PDF Downloads 246
130 Deep Learning Based Text to Image Synthesis for Accurate Facial Composites in Criminal Investigations

Authors: Zhao Gao, Eran Edirisinghe

Abstract:

The production of an accurate sketch of a suspect based on a verbal description obtained from a witness is an essential task for most criminal investigations. The criminal investigation system employs specifically trained professional artists to manually draw a facial image of the suspect according to the descriptions of an eyewitness for subsequent identification. With the advancement of Deep Learning, Recurrent Neural Networks (RNN) have shown great promise in Natural Language Processing (NLP) tasks. Additionally, Generative Adversarial Networks (GAN) have proven to be very effective in image generation. In this study, a trained GAN conditioned on textual features, such as keywords automatically encoded from a verbal description of a human face using an RNN, is used to generate photo-realistic facial images for criminal investigations. The intention of the proposed system is to map the textual features generated from verbal descriptions onto corresponding facial features. With this, it becomes possible to generate many reasonably accurate alternatives that the witness can use to identify a suspect. This reduces subjectivity in decision making by both the eyewitness and the artist while giving the witness an opportunity to evaluate and reconsider decisions. Furthermore, the proposed approach benefits law enforcement agencies by reducing the time taken to physically draw each potential sketch, thus shortening response times and mitigating potentially malicious human intervention. With the publicly available 'CelebFaces Attributes Dataset' (CelebA), supplemented with verbal descriptions as training data, the proposed architecture is able to effectively produce facial structures from given text. Word embeddings are learnt by applying the RNN architecture in order to perform semantic parsing, the output of which is fed into the GAN for synthesizing photo-realistic images. Rather than the grid search method, a metaheuristic search based on genetic algorithms is applied to evolve the network, with the intent of achieving optimal hyperparameters in a fraction of the time of a typical brute-force approach. Beyond the 'CelebA' training database, further novel test cases are supplied to the network for evaluation. Witness reports detailing criminals, from Interpol or other law enforcement agencies, are run through the network. Using the descriptions provided, samples are generated and compared with the ground truth images of a criminal in order to calculate the similarities. Two factors are used for performance evaluation: the Structural Similarity Index (SSIM) and the Peak Signal-to-Noise Ratio (PSNR). High scores on these performance metrics would demonstrate the accuracy of the approach, supporting the case that it can be an effective tool for law enforcement agencies. The proposed approach to criminal facial image generation has the potential to increase the proportion of criminal cases that can ultimately be resolved using eyewitness information.
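
Both evaluation metrics are standard image comparisons; a short sketch with scikit-image shows how they would be computed for one generated/ground-truth pair. The arrays here are random stand-ins, not CelebA images.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

# Random stand-ins for a ground-truth photo and a generated image.
rng = np.random.default_rng(0)
truth = rng.random((128, 128))                 # values in [0, 1]
generated = np.clip(truth + 0.05 * rng.standard_normal((128, 128)), 0, 1)

ssim = structural_similarity(truth, generated, data_range=1.0)
psnr = peak_signal_noise_ratio(truth, generated, data_range=1.0)
print(f"SSIM={ssim:.3f}, PSNR={psnr:.1f} dB")  # higher is more similar
```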

Keywords: RNN, GAN, NLP, facial composition, criminal investigation

Procedia PDF Downloads 159
129 Estimating Poverty Levels from Satellite Imagery: A Comparison of Human Readers and an Artificial Intelligence Model

Authors: Ola Hall, Ibrahim Wahab, Thorsteinn Rognvaldsson, Mattias Ohlsson

Abstract:

The subfield of poverty and welfare estimation that applies machine learning tools and methods to satellite imagery is a nascent but rapidly growing one. This is in part driven by the Sustainable Development Goals, whose overarching principle is that no region is left behind. Among other things, this requires that welfare levels can be accurately and rapidly estimated at different spatial scales and resolutions. Conventional tools of household surveys and interviews do not suffice in this regard. While they are useful for gaining a longitudinal understanding of the welfare levels of populations, they do not offer adequate spatial coverage for the accuracy that is needed, nor is their implementation sufficiently swift to gain an accurate insight into people and places. It is this void that satellite imagery fills. Previously, this was near-impossible to implement due to the sheer volume of data that needed processing. Recent advances in machine learning, especially the deep learning subtype, such as deep neural networks, have made this a rapidly growing area of scholarship. Despite their unprecedented levels of performance, such models lack transparency and explainability and have thus seen limited downstream applications, as humans are generally apprehensive of techniques that are not inherently interpretable and trustworthy. While several studies have demonstrated the superhuman performance of AI models, none has directly compared the performance of such models and human readers in the domain of poverty studies. In the present study, we directly compare the performance of human readers and a deep learning (DL) model using different resolutions of satellite imagery to estimate the welfare levels of Demographic and Health Survey clusters in Tanzania, using the wealth quintile ratings from the same survey as the ground truth data. The cluster-level imagery covers all 608 cluster locations, of which 428 were classified as rural. The imagery for the human readers was sourced from the Google Maps Platform at an ultra-high resolution of 0.6 m per pixel at zoom level 18, while that of the machine learning model was sourced from the comparatively lower resolution Sentinel-2 10 m per pixel data for the same cluster locations. Rank correlation coefficients of between 0.31 and 0.32 achieved by the human readers were much lower than those attained by the machine learning model (0.69-0.79). This superhuman performance by the model is even more significant given that it was trained on the relatively lower 10-meter resolution satellite data, while the human readers estimated welfare levels from the higher 0.6 m spatial resolution data, in which key markers of poverty and slums (roofing and road quality) are discernible. It is important to note, however, that the human readers did not receive any training before the ratings, and had this been done, their performance might have improved. The stellar performance of the model also comes with the inevitable shortfall of limited transparency and explainability. The findings have significant implications for attaining the objective of the current frontier of deep learning models in this domain of scholarship, eXplainable Artificial Intelligence, through a collaborative rather than a comparative framework.
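
The headline statistic, a rank correlation between predicted ratings and survey wealth quintiles, can be computed with SciPy as sketched below; the ten rating pairs are invented stand-ins for the 608 clusters.

```python
from scipy.stats import spearmanr

# Invented ratings for illustration only.
survey_quintiles = [1, 2, 2, 3, 4, 5, 5, 3, 1, 4]   # ground truth
model_ratings    = [1, 3, 2, 3, 4, 4, 5, 2, 2, 5]   # hypothetical predictions

rho, p = spearmanr(survey_quintiles, model_ratings)
print(f"Spearman rho={rho:.2f} (p={p:.3f})")
```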

Keywords: poverty prediction, satellite imagery, human readers, machine learning, Tanzania

Procedia PDF Downloads 104
128 Biomimicked Nano-Structured Coating Elaboration by Soft Chemistry Route for Self-Cleaning and Antibacterial Uses

Authors: Elodie Niemiec, Philippe Champagne, Jean-Francois Blach, Philippe Moreau, Anthony Thuault, Arnaud Tricoteaux

Abstract:

Hygiene of equipment in contact with users is an important issue in the railroad industry, and the numerous cleaning cycles needed to eliminate bacteria and dirt are costly. In addition, mechanical stresses on contact parts are observed daily. It would therefore be valuable to develop a self-cleaning and antibacterial coating with sufficient adhesion and good resistance against mechanical and chemical stresses. To this end, a Ph.D. thesis co-financed by Hauts-de-France and the Maubeuge Val-de-Sambre conurbation authority has been underway since October 2017, based on earlier studies carried out by the Laboratory of Ceramic Materials and Processing. To accomplish this task, a soft chemical route has been implemented to impart a lotus effect to metallic substrates. It involves the synthesis of nanometric zinc oxide in solution below 100°C. The originality here consists in varying the surface texturing by modifying the synthesis time of the species in solution, which helps to adjust wettability. Nanostructured zinc oxide was chosen because of its inherent photocatalytic effect, which can activate the degradation of organic substances. Two methods of heating have been compared: conventional and microwave-assisted. The tested substrates are made of stainless steel to conform to transport uses. Substrate preparation was the first step of the protocol: a meticulous cleaning of the samples is applied. The main goal of the elaboration protocol is to fix enough zinc-based seeds to make them grow as desired (nanorod-shaped) during the next step. To improve this adhesion, a silica gel has been formulated and optimized to ensure chemical bonding between the substrate and the zinc seeds. The last step consists of depositing a long carbon-chain organosilane to improve the superhydrophobic property of the coating. The quasi-proportionality between the reaction time and the nanorod length will be demonstrated. Water contact angles (greater than 150°) and roll-off angles at different steps of the process will be presented. The antibacterial effect has been proved with Escherichia coli, Staphylococcus aureus, and Bacillus subtilis: the bacterial mortality rate is found to be four times that of a non-treated substrate. Photocatalytic experiments were carried out with different dye solutions in contact with treated samples under UV irradiation. Spectroscopic measurements allow the degradation times to be determined according to the zinc quantity available on the surface. The final coating obtained is therefore not a monolayer but rather a stack of amorphous/crystalline/amorphous layers, which have been characterized by spectroscopic ellipsometry. We will show that the thickness of the nanostructured oxide layer depends essentially on the synthesis time set in the hydrothermal growth step. A green, easy-to-process and easy-to-control coating with self-cleaning and antibacterial properties has been synthesized, with satisfactory surface structuring.

Keywords: antibacterial, biomimetism, soft-chemistry, zinc oxide

Procedia PDF Downloads 142
127 Development of a Bead-Based, Fully Automated Multiplex Tool to Simultaneously Diagnose FIV, FeLV and FIP/FCoV

Authors: Andreas Latz, Daniela Heinz, Fatima Hashemi, Melek Baygül

Abstract:

Introduction: Feline leukemia virus (FeLV), feline immunodeficiency virus (FIV), and feline coronavirus (FCoV) are serious infectious diseases affecting cats worldwide. Transmission of these viruses occurs primarily through close contact with infected cats (via saliva, nasal secretions, faeces, etc.). FeLV, FIV, and FCoV infections can occur in combination and present similar clinical symptoms. Diagnosis can therefore be challenging: symptoms are variable and often non-specific, and sick cats show very similar clinical signs (apathy, anorexia, fever, immunodeficiency syndrome, anemia, etc.). In addition, sufficient sample volume for diagnostic purposes can be challenging to collect from small companion animals. Multiplex diagnosis of diseases can contribute to an easier, cheaper, and faster laboratory workflow as well as to better differential diagnosis. For these reasons, we wanted to develop a new diagnostic tool that uses less sample volume, fewer reagents, and fewer consumables than multiple singleplex ELISA assays. Methods: The Multiplier from Dynex Technologies (USA) was used as the platform to develop a multiplex diagnostic tool for the detection of antibodies against FIV and FCoV/FIP and of antigen for FeLV. The Dynex® Multiplier® is a fully automated chemiluminescence immunoassay analyzer that significantly simplifies laboratory workflow; its ease of use reduces pre-analytical steps by combining the power of efficiently multiplexing multiple assays with the simplicity of automated microplate processing. Plastic beads were coated with antigens for FIV and FCoV/FIP, as well as capture antibodies for FeLV. Feline blood samples are incubated with the beads, and the readout of results is performed via chemiluminescence. Results: Bead coating was optimized for each individual antigen or capture antibody and then combined in the multiplex diagnostic tool. HRP-antibody conjugates for FIV and FCoV antibodies, as well as detection antibodies for the FeLV antigen, were adjusted and mixed. Three individual prototype batches of the assay were produced. For each disease, we analyzed 50 well-defined positive and negative samples. The results show excellent diagnostic performance of the simultaneous detection of antibodies or antigens against these feline diseases in a fully automated system, with 100% concordance with singleplex methods such as ELISA or IFA. Intra- and inter-assay tests showed high precision, with CV values below 10% for each individual bead. Accelerated stability testing indicates a shelf life of at least 1 year. Conclusion: The new tool can be used for multiplex diagnostics of the most important feline infectious diseases. Only a very small sample volume is required, and full automation results in a very convenient and fast method for diagnosing animal diseases. With its large specimen capacity to process over 576 samples per 8-hour shift and provide up to 3,456 results, very high laboratory productivity and reagent savings can be achieved.
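
The precision criterion quoted above is the coefficient of variation of replicate bead signals, CV% = 100 x SD / mean. A quick sketch follows; the replicate counts are illustrative, not assay data.

```python
import statistics

# Invented chemiluminescence counts for one bead across replicates.
replicates = [10450, 10980, 10710, 10320, 10860]
cv = 100 * statistics.stdev(replicates) / statistics.mean(replicates)
print(f"intra-assay CV = {cv:.1f}%")   # below 10% passes the criterion
```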

Keywords: multiplex, FIV, FeLV, FCoV, FIP

Procedia PDF Downloads 104
126 An Innovation Decision Process View in an Adoption of Total Laboratory Automation

Authors: Chia-Jung Chen, Yu-Chi Hsu, June-Dong Lin, Kun-Chen Chan, Chieh-Tien Wang, Li-Ching Wu, Chung-Feng Liu

Abstract:

With fast advances in healthcare technology, various total laboratory automation (TLA) processes have been proposed. However, adopting TLA requires substantial funding. This study explores an early adoption experience by Taiwan's large-scale hospital group, the Chimei Hospital Group (CMG), which owns three branch hospitals (Yongkang, Liouying and Chiali, in order of service scale), based on the five stages of Everett Rogers' innovation-decision process. 1. Knowledge stage: Over the years, two weaknesses existed in the laboratory department of CMG: 1) only a few examination categories (e.g., sugar testing and HbA1c) could be completed and reported within a day during an outpatient clinical visit; 2) the Yongkang Hospital laboratory space was dispersed across three buildings, resulting in duplicated investment in analysis instruments and inconvenient manual specimen transportation. Thus, the senior management of the department raised a crucial question: was it time to redesign the laboratory department? 2. Persuasion stage: At the end of 2013, Yongkang Hospital's new building and restructuring project created a great opportunity for the redesign of the laboratory department. However, not all laboratory colleagues agreed on the change. Thus, the top managers arranged a series of benchmark visits to make colleagues aware of, and willing to accept, TLA. Later, the director of the department submitted a formal report to the top management of CMG with the results of the benchmark visits, a preliminary feasibility analysis, potential benefits, and so on. 3. Decision stage: The TLA suggestion was well supported by the top management of CMG, who finally decided to carry out the project with an instrument-leasing strategy. After the announcement of a request for proposals and several vendor briefings, CMG confirmed their laboratory automation architecture and completed the contracts. At the same time, a cross-department project team was formed, and the laboratory department assigned a section leader to the National Taiwan University Hospital for one month of relevant training. 4. Implementation stage: During the implementation, the project team called regular meetings to review the results of the operations and respond immediately with adjustments. The main project tasks included: 1) completing the preparatory work for beginning the automation procedures; 2) ensuring information security and privacy protection; 3) formulating automated examination process protocols; 4) evaluating the performance of the new instruments and the instrument connectivity; 5) ensuring good integration with the hospital information system (HIS) and laboratory information system (LIS); and 6) ensuring continued compliance with ISO 15189 certification. 5. Confirmation stage: In short, the core process changes include: 1) cancellation of signature seals on the specimen tubes; 2) transfer of daily examination reports to a data warehouse; 3) incorporation of routine pre-admission blood drawing and formal inpatient morning blood drawing into an automatically-prepared tube mechanism. The study summarizes the continuous improvement orientations as follows: (1) flexible reference range set-up for new instruments in the LIS; (2) restructuring of the specimen categories; (3) continuous review and improvement of the examination process; (4) further evaluation of whether to install tube (specimen) delivery tracks.

Keywords: innovation decision process, total laboratory automation, health care

Procedia PDF Downloads 419
125 “Laws Drifting Off While Artificial Intelligence Thriving” – A Comparative Study with Special Reference to Computer Science and Information Technology

Authors: Amarendar Reddy Addula

Abstract:

Definition of Artificial Intelligence: Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition, and machine vision. Artificial Intelligence (AI) is an original medium for digital business, according to a new report by Gartner. The last 10 years represent an advance period in AI's development, spurred by the confluence of factors including the rise of big data, advancements in computing infrastructure, new machine learning techniques, the emergence of cloud computing, and the vibrant open-source ecosystem. AI is reaching a broader set of use cases and users and is gaining popularity because this improves its versatility, effectiveness, and adaptability. Edge AI will enable digital moments by employing AI for real-time analytics closer to data sources. Gartner predicts that by 2025, more than 50% of all data analysis by deep neural networks will occur at the edge, up from less than 10% in 2021. Responsible AI is an umbrella term for making suitable business and ethical choices when adopting AI. It requires considering business and societal value, risk, trust, transparency, fairness, bias mitigation, explainability, accountability, safety, privacy, and regulatory compliance. Responsible AI is ever more significant amidst growing regulatory oversight, consumer expectations, and rising sustainability goals. Generative AI is the use of AI to generate new artifacts and produce innovative products. To date, generative AI efforts have concentrated on creating media content such as photorealistic images of people and objects, but it can also be used for code generation, creating synthetic tabular data, and designing pharmaceuticals and materials with specific properties. AI is the subject of a wide-ranging debate in which there is growing concern about its ethical and legal aspects. Frequently, the two are mixed up and confused despite being different issues and areas of knowledge. The ethical debate raises two main problems: the first, conceptual, relates to the idea and content of ethics; the second, practical, concerns its relationship with the law. Both set up models of social behaviour, but they are different in scope and nature. The juridical analysis is grounded on a non-formalistic scientific methodology. This means that it is essential to consider the nature and characteristics of AI as a step preliminary to the description of its legal paradigm. In this regard, there are two main issues: the relationship between artificial and human intelligence, and the question of the unitary or diverse nature of AI. From that theoretical and practical base, the study of the legal system is carried out by examining its foundations, the governance model, and the regulatory bases. Following this analysis, throughout the work and in the conclusions, International Law is identified as the primary legal framework for the regulation of AI.

Keywords: artificial intelligence, ethics & human rights issues, laws, international laws

Procedia PDF Downloads 94
124 Interpretable Deep Learning Models for Medical Condition Identification

Authors: Dongping Fang, Lian Duan, Xiaojing Yuan, Mike Xu, Allyn Klunder, Kevin Tan, Suiting Cao, Yeqing Ji

Abstract:

Accurate prediction of a medical condition with straightforward clinical evidence is a long-sought goal in the medical management and health insurance fields. Although great progress has been made with machine learning algorithms, the medical community remains, to a certain degree, suspicious about model accuracy and interpretability. This paper presents an innovative hierarchical attention deep learning model that achieves good prediction and clear interpretability that can be easily understood by medical professionals. The model uses a hierarchical attention structure that matches the medical history data structure naturally and reflects the member's encounter (date of service) sequence. The attention structure consists of 3 levels: (1) attention on the medical code types (diagnosis codes, procedure codes, lab test results, and prescription drugs), (2) attention on the sequential medical encounters within a type, and (3) attention on the medical codes within an encounter and type. The model is applied to predict the occurrence of stage 3 chronic kidney disease (CKD3), using three years' medical history of Medicare Advantage (MA) members from a top health insurance company. The model takes members' medical events, both claims and electronic medical record (EMR) data, as input, makes a prediction of CKD3, and calculates the contribution of individual events to the predicted outcome. The model outcome can be easily explained with the clinical evidence identified by the model algorithm. Here are examples: Member A had 36 medical encounters in the past three years: multiple office visits, lab tests, and medications. The model predicts that member A has a high risk of CKD3 based on the following well-contributing clinical events: multiple high 'Creatinine in Serum or Plasma' tests and multiple low kidney-function 'Glomerular filtration rate' tests. Among the abnormal lab tests, more recent results contributed more to the prediction. The model also indicates that regular office visits, no abnormal findings in medical examinations, and taking proper medications decreased the CKD3 risk. Member B had 104 medical encounters in the past 3 years and was predicted to have a low risk of CKD3 because the model didn't identify diagnoses, procedures, or medications related to kidney disease, and many lab test results, including 'Glomerular filtration rate', were within the normal range. The model accurately predicts members A and B and provides interpretable clinical evidence that is validated by clinicians. Without extra effort, the interpretation is generated directly from the model and presented together with the occurrence date. Our model uses the medical data in its most raw format without any further data aggregation, transformation, or mapping. This greatly simplifies the data preparation process, mitigates the chance of error, and eliminates the post-modeling work needed for traditional model explanation. To our knowledge, this is the first paper on an interpretable deep learning model using a 3-level attention structure, sourcing both EMR and claims data, including all 4 types of medical data, on the entire Medicare population of a big insurance company, and, more importantly, directly generating model interpretation to support user decisions. In the future, we plan to enrich the model input by adding patients' demographics and information from free-text physician notes.
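
A minimal sketch of one level of the hierarchical attention described above: attention pooling over the medical codes within a single encounter, written in Python/PyTorch. The dimensions and the linear scoring network are assumptions; the paper's full model stacks three such levels (code, encounter, and code type).

```python
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Attention pooling over the code embeddings of one encounter."""
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)    # learns each code's relevance

    def forward(self, code_embeddings: torch.Tensor) -> torch.Tensor:
        # code_embeddings: (n_codes, dim) for a single encounter.
        weights = torch.softmax(self.score(code_embeddings), dim=0)
        # The weights double as per-code contributions for interpretation;
        # the pooled vector feeds the next attention level.
        return (weights * code_embeddings).sum(dim=0)

pool = AttentionPool(dim=16)
encounter = torch.randn(5, 16)            # 5 codes in one visit (dummy data)
print(pool(encounter).shape)              # torch.Size([16])
```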

Keywords: deep learning, interpretability, attention, big data, medical conditions

Procedia PDF Downloads 91
123 Geophysical Methods and Machine Learning Algorithms for Stuck Pipe Prediction and Avoidance

Authors: Ammar Alali, Mahmoud Abughaban

Abstract:

Cost reduction and drilling optimization are the goals of many drilling operators. Historically, stuck pipe incidents have been a major segment of non-productive time (NPT) and its associated costs. Traditionally, stuck pipe problems are treated as part of operations and solved after sticking occurs. However, the real key to savings and success is predicting stuck pipe incidents and avoiding the conditions leading to their occurrence. Previous attempts at stuck-pipe prediction have neglected the local geology of the problem. The proposed predictive tool utilizes geophysical data processing techniques and Machine Learning (ML) algorithms to predict drilling events in real time using surface drilling data with minimal computational power. The method combines two types of analysis: (1) real-time prediction and (2) cause analysis. Real-time prediction aggregates the input data, including historical drilling surface data, geological formation tops, and petrophysical data, from wells within the same field. The input data are then flattened per geological formation and stacked per stuck-pipe incident. The algorithm uses these two physical methods (stacking and flattening) to filter any noise in the signature and create a robust pre-determined pilot signature that adheres to the local geology. Once the drilling operation starts, the Wellsite Information Transfer Standard Markup Language (WITSML) live surface data are fed into a matrix and aggregated at the same frequency as the pre-determined signature. Then, the matrix is correlated with the pre-determined stuck-pipe signature for the field, in real time. The correlation uses a machine learning Correlation-based Feature Selection (CFS) algorithm, which selects features relevant to the class and identifies redundant features. The correlation output is interpreted as a probability curve for stuck pipe incidents in real time. Once this probability passes a fixed threshold defined by the user, the other component, cause analysis, alerts the user to the expected incident based on the set of pre-determined signatures, and a set of recommendations is provided to reduce the associated risk. The validation process involved feeding historical drilling data from an onshore oil field as a live stream, mimicking actual drilling conditions. Pre-determined signatures were created beforehand for three problematic geological formations in this field. Three wells were processed as case studies, and the stuck-pipe incidents were predicted successfully, with an accuracy of 76%. This accuracy of detection could have resulted in around a 50% reduction in NPT, equivalent to a 9% cost saving in comparison with offset wells. The prediction of the stuck pipe problem requires a method to capture geological, geophysical, and drilling data and to recognize the indicators of this issue at the field and geological formation level. This paper illustrates the efficiency and robustness of the proposed cross-disciplinary approach in its ability to produce such signatures and predict this NPT event.
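
A hedged sketch of the real-time matching step follows: correlate a window of live surface data against a pre-determined signature and alert when the output passes the user threshold. For brevity, a plain Pearson correlation stands in for the paper's CFS-based correlation, and the signature, channel, and threshold are all assumptions.

```python
import numpy as np

# Stand-in pilot signature and an incoming live data window (one channel).
signature = np.sin(np.linspace(0, 3, 50))
live_window = signature + 0.2 * np.random.default_rng(0).standard_normal(50)

def sticking_probability(window: np.ndarray, sig: np.ndarray) -> float:
    """Pearson correlation of the live window with the signature,
    clipped to [0, 1] and read as a rough probability proxy."""
    r = np.corrcoef(window, sig)[0, 1]
    return max(0.0, r)

THRESHOLD = 0.7                                   # assumed user-defined value
p = sticking_probability(live_window, signature)
if p > THRESHOLD:
    print(f"stuck-pipe alert: correlation {p:.2f} exceeds {THRESHOLD}")
```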

Keywords: drilling optimization, hazard prediction, machine learning, stuck pipe

Procedia PDF Downloads 226
122 Effectiveness of Participatory Ergonomic Education on Pain Due to Work Related Musculoskeletal Disorders in Food Processing Industrial Workers

Authors: Salima Bijapuri, Shweta Bhatbolan, Sejalben Patel

Abstract:

Ergonomics concerns the fitting of the environment and the equipment to the worker, and ergonomic principles can be employed in different dimensions of the industrial sector. Participation of all the stakeholders is the key to formulating a multifaceted and comprehensive approach to lessen the burden of occupational hazards. Taking responsibility for one's own work activities, by acquiring sufficient knowledge and the potential to influence practices and outcomes, is the basis of participatory ergonomics and even hastens the identification of workplace hazards. This study aimed to assess how effective participatory ergonomics can be in the management of work-related musculoskeletal disorders (WRMSDs). Method: A mega kitchen was identified in a twin city of Karnataka, India. Consent was taken, and the screening of workers was done using observation methods. Kitchen work was structured into different tasks, which included preparing, cooking, distributing, and serving food; packing food to be delivered to schools; dishwashing; cleaning and maintenance of the kitchen and equipment; and receiving and storing raw material. A total of 100 workers attended an education session on participatory ergonomics and its role in implementing correct ergonomic practices, thus preventing WRMSDs. Demographic details and baseline data on related musculoskeletal pain and discomfort were collected pre- and post-study using the Nordic pain questionnaire and VAS score. Monthly visits were made, and the education sessions were repeated on each visit, reminding and correcting each worker and solving problems. After nine months, with a total of four such education sessions, the post-education data were collected. The software SPSS 20 was used to analyse the collected data. Results: The majority of the workers (78%) participated in the intervention workshops, which were arranged four times, depending on availability and feasibility. The average age of the participants was 39 years. Female participants made up 79.49% of the sample and males 20.51%. The Nordic Musculoskeletal Questionnaire (NMQ) showed that knee pain was the most commonly reported complaint (62%) over the preceding 12 months, with a mean VAS score of 6.27, followed by low back pain. Post-intervention, the mean VAS score was significantly reduced to 2.38. The comparison of pre- and post-scores was made using the Wilcoxon matched-pairs test. Upon enquiry, it was found that the participants had learned the importance of applying ergonomics at their workplace, which in turn helped them handle problems arising at work on their own, with self-confidence. Conclusion: Participatory ergonomics proved effective with the workers of the mega kitchen, and it is a feasible and practical approach. The advantage of the given study area was that it had a sophisticated and ergonomically designed workstation; thus, the main gap to address was the lack of education and practical knowledge in using these stations. There was a significant reduction in VAS scores with the implementation of changes in working style, and the knowledge of ergonomics helped to decrease physical load and improve musculoskeletal health.
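
The pre/post comparison reported above can be reproduced in form with SciPy's Wilcoxon matched-pairs (signed-rank) test, as sketched below; the ten paired VAS scores are invented for illustration, not the study's data.

```python
from scipy.stats import wilcoxon

# Invented paired VAS pain scores (the study collected 100 pairs).
vas_pre  = [7, 6, 8, 5, 6, 7, 6, 5, 8, 7]
vas_post = [3, 2, 4, 2, 2, 3, 1, 2, 3, 2]

stat, p = wilcoxon(vas_pre, vas_post)
print(f"Wilcoxon W={stat}, p={p:.4f}")   # p < 0.05 -> significant reduction
```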

Keywords: ergonomic awareness session, mega kitchen, participatory ergonomics, work related musculoskeletal disorders

Procedia PDF Downloads 138
121 The Biosphere as a Supercomputer Directing and Controlling Evolutionary Processes

Authors: Igor A. Krichtafovitch

Abstract:

Evolutionary processes are not linear. Long periods of quiet and slow development turn into rather rapid emergences of new species and even phyla. During the Cambrian explosion, 22 new phyla were added to the 3 previously existing phyla. Contrary to common belief, natural selection, or survival of the fittest, cannot account for the dominant evolutionary vector, which is the steady and accelerating advent of more complex and more intelligent living organisms. Neither Darwinism nor alternative concepts, including panspermia and intelligent design, propose a satisfactory explanation for these phenomena. The proposed hypothesis offers a logical and plausible explanation of evolutionary processes in general. It is based on two postulates: a) the Biosphere is a single living organism, all parts of which are interconnected, and b) the Biosphere acts as a giant biological supercomputer, storing and processing information in digital and analog forms. Such a supercomputer surpasses all human-made computers by many orders of magnitude. Living organisms are the product of the intelligent creative action of the biosphere supercomputer. Biological evolution is driven by the growing amount of information stored in living organisms and the increasing complexity of the biosphere as a single organism. The main evolutionary vector is not survival of the fittest but an accelerated growth of the computational complexity of living organisms. The following postulates summarize the proposed hypothesis: biological evolution, as natural life origin and development, is a reality. Evolution is a coordinated and controlled process. One of evolution's main development vectors is the growing computational complexity of living organisms and the biosphere's intelligence. The intelligent matter which conducts and controls global evolution is a gigantic bio-computer combining all living organisms on Earth. Information acts like software stored in and controlled by the biosphere. Random mutations trigger this software, as is stipulated by Darwinian evolution theories, and it is further stimulated by the growing demand for the Biosphere's global memory storage and computational complexity. A greater memory volume requires a greater number of more intellectually advanced organisms for storing and handling it. More intricate organisms require greater computational complexity of the biosphere in order to keep control over the living world. This is an endless recursive endeavor with accelerating evolutionary dynamics. New species emerge when two conditions are met: a) crucial environmental changes occur and/or the global memory storage volume reaches its limit, and b) the biosphere's computational complexity reaches the critical mass capable of producing more advanced creatures. The hypothesis presented here is a naturalistic concept of life creation and evolution. It logically resolves many puzzling problems of the current state of evolution theory, such as speciation, as a result of GM purposeful design; the evolutionary development vector, as a need for growing global intelligence; punctuated equilibrium, happening when the two conditions a) and b) above are met; the Cambrian explosion; and mass extinctions, happening when more intelligent species should replace outdated creatures.

Keywords: supercomputer, biological evolution, Darwinism, speciation

Procedia PDF Downloads 164
120 The Proposal for a Framework to Face Opacity and Discrimination ‘Sins’ Caused by Consumer Creditworthiness Machines in the EU

Authors: Diogo José Morgado Rebelo, Francisco António Carneiro Pacheco de Andrade, Paulo Jorge Freitas de Oliveira Novais

Abstract:

Not everything in AI-powered consumer credit scoring turns out to be a wonder. When using AI in Creditworthiness Assessment (CWA), the 'sins' of opacity and unfairness must be addressed for the task to be deemed Responsible. AI software is not always 100% accurate, which can lead to misclassification. Discrimination against some groups can be exponentiated. A hetero-personalized identity can be imposed on the affected individual(s). Also, autonomous CWA sometimes lacks transparency when using black-box models. However, for this intended purpose, human analysts 'on-the-loop' might not be the best remedy consumers are looking for in credit. This study explores the legality of implementing a Multi-Agent System (MAS) framework in consumer CWA to ensure compliance with the regulation outlined in Article 14(4) of the Proposal for an Artificial Intelligence Act (AIA), dated 21 April 2021 (as per the last corrigendum by the European Parliament on 19 April 2024). Especially with the adoption of Art. 18(8)(9) of EU Directive 2023/2225 of 18 October, which will go into effect on 20 November 2026, there should be more emphasis on the need for hybrid oversight in AI-driven scoring to ensure fairness and transparency. In fact, the range of EU regulations on AI-based consumer credit will soon impact the AI lending industry locally and globally, as shown by the broad territorial scope of the AIA's Art. 2. Consequently, engineering the law of consumers' CWA is imperative. Generally, the proposed MAS framework consists of several layers arranged in a specific sequence, as follows: first, the Data Layer gathers legitimate predictor sets from traditional sources; then, the Decision Support System Layer, whose neural network model is trained using k-fold cross-validation, provides recommendations based on the feeder data; the eXplainability (XAI) multi-structure comprises Three-Step Agents; and, lastly, the Oversight Layer has a 'Bottom Stop' for analysts to intervene in a timely manner. From the analysis, one can see that a vital component of this software is the XAI layer. It acts as a transparent curtain covering the AI's decision-making process, enabling comprehension, reflection, and further feasible oversight. Local Interpretable Model-agnostic Explanations (LIME) might act as a pillar by offering counterfactual insights. SHapley Additive exPlanations (SHAP), another agent in the XAI layer, could address potential discrimination issues by identifying the contribution of each feature to the prediction. Alternatively, for thin-file or no-file consumers, the Suggestion Agent can promote financial inclusion. It uses lawful alternative sources, such as share of wallet, among others, to search for more advantageous solutions to incomplete evaluation appraisals based on genetic programming. Overall, this research aspires to bring the concept of Machine-Centered Anthropocentrism to the table of EU policymaking. It acknowledges that, once put into service, credit analysts no longer exert full control over the data-driven entities programmers have given 'birth' to. With similar explanatory agents under supervision, AI itself can become self-accountable, prioritizing human concerns and values. AI decisions should not be vilified inherently. The issue lies in how they are integrated into decision-making and whether they align with non-discrimination principles and transparency rules.
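
The training step of the Decision Support System Layer, k-fold cross-validation of a neural network on a predictor set, can be sketched with scikit-learn as follows. The synthetic features, labels, network size, and k = 5 are all assumptions for illustration.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for a legitimate predictor set and repay/default labels.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)     # 5-fold cross-validation
print(f"fold accuracies: {np.round(scores, 2)}, mean={scores.mean():.2f}")
```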

Keywords: creditworthiness assessment, hybrid oversight, machine-centered anthropocentrism, EU policymaking

Procedia PDF Downloads 34
119 Wind Tunnel Tests on Ground-Mounted and Roof-Mounted Photovoltaic Array Systems

Authors: Chao-Yang Huang, Rwey-Hua Cherng, Chung-Lin Fu, Yuan-Lung Lo

Abstract:

Solar energy is one of the renewable choices for reducing the CO2 emissions produced by conventional power plants in modern society. As an island frequently visited by strong typhoons and earthquakes, Taiwan urgently needs to revise its local regulations to strengthen the safety design of photovoltaic systems. Currently, the Taiwanese code for wind-resistant design of structures does not clearly address photovoltaic systems, especially when the systems are arranged in an array format. Furthermore, when an arrayed photovoltaic system is mounted on a rooftop, the approaching flow is significantly altered by the building, leading to different pressure patterns in different areas of the photovoltaic system. In this study, an L-shaped arrayed photovoltaic system is mounted first on the ground of the wind tunnel and then on the building rooftop. The system consists of 60 panel models. Each panel model is equivalent to a full size of 3.0 m in depth and 10.0 m in length. Six pressure taps are installed on the upper surface of each panel model and another six on the bottom surface to measure the net pressures. The wind attack angle is varied from 0° to 360° in 10° intervals to capture the worst-case wind direction. The sampling rate of the pressure scanning system is set high enough to precisely estimate the peak pressures, and at least 20 samples are recorded for good ensemble average stability. Each sample is equivalent to a 10-minute record in full scale. All the scale factors, including the time scale, length scale, and velocity scale, are properly verified by similarity rules in the low-wind-speed wind tunnel environment. The purpose of the L-shaped arrayed system is to understand the pressure characteristics in the corner area. Extreme value analysis is applied to obtain the design pressure coefficient for each net pressure. The commonly utilized Cook-and-Mayne coefficient, 78%, is set as the target non-exceedance probability for design pressure coefficients under the Gumbel distribution. The best linear unbiased estimator (BLUE) method is utilized for Gumbel parameter identification. A careful moving-average method is also applied in the data processing. Results show that when the arrayed photovoltaic system is mounted on the ground, the first row of panels reveals stronger positive pressure than when mounted on the rooftop. Due to the flow separation occurring at the building edge, the first row of panels on the rooftop is mostly under negative pressure; the last row, on the other hand, shows positive pressures because of flow reattachment. Different areas also have different pressure patterns, which corresponds well to the provisions in ASCE 7-16 describing the area division for design values. Several minor observations are made based on parametric studies, such as the rooftop edge effect, parapet effect, building aspect effect, row interval effect, and so on. General comments are then made for the proposed revision of the Taiwanese code.
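
The extreme value analysis step can be sketched as follows: fit a Gumbel distribution to the roughly 20 recorded peak pressure coefficients and read off the value at the 78% (Cook-and-Mayne) non-exceedance probability. The sample peaks are invented, and SciPy's fit uses maximum likelihood rather than the paper's BLUE method.

```python
import numpy as np
from scipy.stats import gumbel_r

# Invented suction (negative) peak pressure coefficients from 20 runs.
peak_cp = np.array([-1.8, -2.1, -1.9, -2.4, -2.0, -1.7, -2.2, -2.3,
                    -1.9, -2.0, -2.5, -1.8, -2.1, -2.2, -1.9, -2.0,
                    -2.3, -1.8, -2.4, -2.1])

# Negate the suction peaks so the extremes are maxima, then fit Gumbel.
loc, scale = gumbel_r.fit(-peak_cp)        # MLE fit (BLUE in the paper)
design_cp = -gumbel_r.ppf(0.78, loc, scale)
print(f"design pressure coefficient at 78% non-exceedance: {design_cp:.2f}")
```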

Keywords: aerodynamic force coefficient, ground-mounted, roof-mounted, wind tunnel test, photovoltaic

Procedia PDF Downloads 138
118 Comparing Radiographic Detection of Simulated Syndesmosis Instability Using Standard 2D Fluoroscopy Versus 3D Cone-Beam Computed Tomography

Authors: Diane Ghanem, Arjun Gupta, Rohan Vijayan, Ali Uneri, Babar Shafiq

Abstract:

Introduction: Ankle sprains and fractures often result in syndesmosis injuries. Unstable syndesmotic injuries result from relative motion between the distal ends of the tibia and fibula, an anatomic juncture which should otherwise be rigid, and warrant operative management. Clinical and radiological evaluation of intraoperative syndesmosis stability remains a challenging task, as traditional 2D fluoroscopy is limited to uniplanar translational displacement. The purpose of this pilot cadaveric study is to compare stress-induced syndesmosis displacements measured with 2D fluoroscopy and 3D cone-beam computed tomography (CBCT). Methods: Three fresh-frozen lower legs underwent 2D fluoroscopy and 3D CIOS CBCT to measure syndesmosis position before dissection. Syndesmotic injury was simulated by resecting (1) the anterior inferior tibiofibular ligament (AITFL), (2) the posterior inferior tibiofibular ligament (PITFL) and the inferior transverse ligament (ITL) simultaneously, and (3) the interosseous membrane (IOM). Manual external rotation and the Cotton stress test were performed after each of the three resections, and 2D and 3D images were acquired. Relevant 2D and 3D parameters included the tibiofibular overlap (TFO), tibiofibular clear space (TCS), relative rotation of the fibula, and anterior-posterior (AP) and medial-lateral (ML) translations of the fibula relative to the tibia. Parameters were measured by two independent observers, and inter-rater reliability was assessed by the intraclass correlation coefficient (ICC) to determine measurement precision. Results: Significant mismatches were found between the trends in the 2D and 3D measurements when assessing TFO, TCS, and AP translation across the different resection states. Using 3D CBCT, TFO was inversely proportional to the number of resected ligaments, while TCS was directly proportional to it, across all cadavers and 'resection + stress' states. Using 2D fluoroscopy, this trend did not hold under the Cotton stress test. 3D AP translation did not show a reliable trend, whereas 2D AP translation of the fibula was positive under the Cotton stress test and negative under external rotation. 3D relative rotation of the fibula, assessed using the Tang et al. ratio method and the Beisemann et al. angular method, suggested slight overall internal rotation with complete resection of the ligaments, with a change of less than 2 mm, the threshold corresponding to the buffer commonly used to account for physiologic laxity per the surgeon's clinical judgment. Excellent agreement (>0.90) was found between the two independent observers for each parameter in both 2D and 3D (overall ICC 0.9968, 95% CI 0.995-0.999). Conclusions: 3D CIOS CBCT appears to reliably depict the trends in TFO and TCS. This may be due to its additional detection of relevant rotational malpositions of the fibula compared with standard 2D fluoroscopy, which is limited to single-plane translation. A better understanding of 3D imaging may help surgeons identify the precise measurement planes needed to achieve better syndesmosis repair.
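
The inter-rater reliability step can be illustrated with a short computation of a two-way random-effects, single-measure intraclass correlation, ICC(2,1), from a cases-by-raters matrix. The sample measurements below are invented, and the study's exact ICC model may differ; this is a minimal sketch of the statistic.

```python
# Minimal ICC(2,1) (two-way random effects, absolute agreement, single rater)
# computed from an n-cases x k-raters matrix; sample data are invented.
import numpy as np

Y = np.array([[2.1, 2.2],     # e.g. tibiofibular clear space (mm),
              [3.4, 3.3],     # measured by two independent observers
              [4.0, 4.1],
              [2.8, 2.9],
              [3.1, 3.1]])
n, k = Y.shape
grand = Y.mean()
subj = Y.mean(axis=1)         # per-case means
rater = Y.mean(axis=0)        # per-observer means

MSR = k * ((subj - grand) ** 2).sum() / (n - 1)      # between-cases mean square
MSC = n * ((rater - grand) ** 2).sum() / (k - 1)     # between-raters mean square
SSE = ((Y - subj[:, None] - rater[None, :] + grand) ** 2).sum()
MSE = SSE / ((n - 1) * (k - 1))                      # residual mean square

icc21 = (MSR - MSE) / (MSR + (k - 1) * MSE + k * (MSC - MSE) / n)
print(f"ICC(2,1) = {icc21:.4f}")
```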

Keywords: 2D fluoroscopy, 3D computed tomography, image processing, syndesmosis injury

Procedia PDF Downloads 70
117 Evaluation of Alternative Approaches for Additional Damping in Dynamic Calculations of Railway Bridges under High-Speed Traffic

Authors: Lara Bettinelli, Bernhard Glatz, Josef Fink

Abstract:

Planning engineers and researchers use various calculation models with different levels of complexity, computational efficiency, and accuracy in dynamic calculations of railway bridges under high-speed traffic. When choosing a vehicle model to depict the dynamic loading on the bridge structure caused by passing high-speed trains, different goals are pursued: on the one hand, the selected vehicle model should allow a bridge's vibrations to be calculated as realistically as possible; on the other hand, the computational efficiency and manageability of the model should preferably be high to enable a wide range of applications. The commonly adopted and straightforward vehicle model is the moving load model (MLM), which simplifies the train to a sequence of static axle loads moving at a constant speed over the structure. However, the MLM can significantly overestimate the structure's vibrations, especially when resonance events occur. More complex vehicle models, which depict the train as a system of oscillating and coupled masses, can reproduce the interaction dynamics between the vehicle and the bridge superstructure to some extent and enable the calculation of more realistic bridge accelerations. At the same time, such multi-body models require significantly greater processing capacity and precise knowledge of various vehicle properties. The European standards allow the so-called additional damping method to be applied when simple load models, such as the MLM, are used in dynamic calculations: an additional damping factor depending on the bridge span, intended to account for the vibration-reducing benefits of vehicle-bridge interaction, is assigned to the supporting structure in the calculations. However, numerous studies show that when the current standard specifications are applied, the calculated bridge accelerations are in many cases still too high compared to measured values, while in other cases they are not on the safe side. To address this issue, a proposal was developed to calculate the additional damping based on extensive dynamic calculations for a parametric field of simply supported bridges with ballasted track. In this contribution, several different approaches to determining the additional damping of the supporting structure, considering vehicle-bridge interaction when the MLM is used, are compared with one another. Besides the standard specifications, this includes the approach mentioned above and two recently published alternative formulations derived from analytical approaches. For a catalogue of 65 existing bridges in Austria in steel, concrete, or composite construction, calculations are carried out with the MLM for two different high-speed trains and the different approaches for additional damping. The results are compared with those obtained by applying a more sophisticated multi-body model of the trains. The evaluation and comparison of the results allow the benefits of the different calculation concepts for additional damping to be assessed with regard to their accuracy and possible applications. The evaluation shows that by applying one of the recently published redesigned additional damping methods, the calculation results reflect the influence of vehicle-bridge interaction on the design-relevant structural accelerations considerably more reliably than the normative specifications.
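
The moving load model itself is compact enough to sketch. For the first bending mode of a simply supported bridge, the modal coordinate q(t) obeys q'' + 2ζωq' + ω²q = (2/(μL)) Σₖ Pₖ sin(πxₖ(t)/L), summed over the axles currently on the span. The Python sketch below integrates this equation with invented bridge and train parameters (first mode only); it illustrates the MLM idea, not the parametric study's software.

```python
# Minimal moving load model (MLM): first bending mode of a simply supported
# bridge crossed by a sequence of static axle loads at constant speed.
# Bridge and train parameters are invented for illustration.
import numpy as np
from scipy.integrate import solve_ivp

L, mu, f1, zeta = 30.0, 15e3, 4.0, 0.015   # span (m), mass/length (kg/m), Hz, damping
omega = 2 * np.pi * f1
P = np.full(8, 160e3)                       # 8 axle loads (N)
d = np.arange(8) * 18.0                     # axle offsets behind the first axle (m)
v = 70.0                                    # train speed (m/s)

def rhs(t, y):
    q, qdot = y
    x = v * t - d                           # axle positions along the span
    on = (x >= 0) & (x <= L)                # only axles currently on the bridge
    force = (2 / (mu * L)) * np.sum(P[on] * np.sin(np.pi * x[on] / L))
    return [qdot, force - 2 * zeta * omega * qdot - omega**2 * q]

t_end = (L + d[-1]) / v                     # last axle leaves the span
sol = solve_ivp(rhs, (0, t_end), [0.0, 0.0], max_step=1e-3)
print("peak midspan deflection (m):", np.abs(sol.y[0]).max())
```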

Keywords: additional damping method, bridge dynamics, high-speed railway traffic, vehicle-bridge interaction

Procedia PDF Downloads 161
116 Soybean Lecithin Based Reverse Micellar Extraction of Pectinase from Synthetic Solution

Authors: Sivananth Murugesan, I. Regupathi, B. Vishwas Prabhu, Ankit Devatwal, Vishnu Sivan Pillai

Abstract:

Pectinase is an important enzyme with a wide range of applications, including textile processing and bioscouring of cotton fibers, coffee and tea fermentation, purification of plant viruses, and oil extraction. Selectively separating and purifying pectinase from fermentation broth and recovering the enzyme from the process stream for reuse are costly steps in most enzyme-based industries, and it is difficult to identify a suitable medium that enhances enzyme activity while retaining the enzyme's characteristics during such processes. The cost-effective, selective separation of enzymes through modified liquid-liquid extraction is of current research interest worldwide. Reverse micellar extraction, a widely acclaimed liquid-liquid extraction technique, is well known for the separation and purification of solutes from the feed, offering high solute specificity and partitioning, ease of operation, and recycling of the extractants used. Surfactants dissolved in an apolar solvent above the critical micelle concentration form micelles, and the addition of water to the micellar phase in turn forms reverse micelles, or water-in-oil emulsions. Electrostatic interactions play a major role in the separation and purification of solutes using reverse micelles, and these interactions can be altered by changing the pH or by adding cosolvents, surfactants, and electrolytes or non-electrolytes. Even though many commercial chemical surfactants have been utilized for this purpose, biosurfactants are more suitable for the purification of enzymes used in food applications. The present work focused on the partitioning of pectinase from a synthetic aqueous solution into the reverse micelle phase formed by a biosurfactant, soybean lecithin, dissolved in chloroform. The critical micelle concentration of the soybean lecithin/chloroform solution was identified through refractive index and density measurements. Surfactant concentrations above and below the critical micelle concentration were considered to study their effect on enzyme activity and on enzyme partitioning within the reverse micelle phase. The effects of pH and electrolyte salts on the partitioning behavior were studied by varying the system pH and the concentration of different salts during the forward and back extraction steps. It was observed that lower concentrations of soybean lecithin enhanced the enzyme activity within the water core of the reverse micelle while maximizing extraction efficiency. A maximum pectinase yield of 85% with a partition coefficient of 5.7 was achieved at pH 4.8 during forward extraction, and an 88% yield with a partition coefficient of 7.1 was observed during back extraction at pH 5.0. However, the addition of salt decreased the enzyme activity, which declined drastically at higher salt concentrations during both the forward and back extraction steps. The results showed that reverse micelles formed by soybean lecithin and chloroform may be used for the extraction of pectinase from aqueous solution. Further, the reverse micelles can be considered nanoreactors that enhance enzyme activity and maximize substrate utilization at optimized conditions, paving the way to process intensification and scale-down.
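
The reported yields and partition coefficients are linked by a simple mass balance: with partition coefficient K = C_micellar/C_aqueous and phase volume ratio R = V_micellar/V_aqueous, the extracted fraction is Y = KR/(1 + KR). Assuming equal phase volumes (R = 1, an assumption not stated in the abstract), the reported K values reproduce the reported yields almost exactly, as the short sketch below shows.

```python
# Mass-balance check linking partition coefficient K and extraction yield:
# Y = K*R / (1 + K*R), with R the micellar-to-aqueous phase volume ratio.
# R = 1 (equal phase volumes) is an assumption for illustration.
def extraction_yield(K, R=1.0):
    return K * R / (1 + K * R)

print(f"forward:  K=5.7 -> Y = {extraction_yield(5.7):.1%}")   # ~85%, as reported
print(f"backward: K=7.1 -> Y = {extraction_yield(7.1):.1%}")   # ~88%, as reported
```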

Keywords: pectinase, reverse micelles, soybean lecithin, selective partitioning

Procedia PDF Downloads 372
115 Quality of Care for the Maternal Complications at Selected Primary and Secondary Health Facilities of Bangladesh: Lessons Learned from a Formative Research

Authors: Mohiuddin Ahsanul Kabir Chowdhury, Nafisa Lira Huq, Afroza Khanom, Rafiqul Islam, Abdullah Nurus Salam Khan, Farhana Karim, Nabila Zaka, Shams El Arifeen, Sk. Masum Billah

Abstract:

After astounding achievements in reducing maternal mortality and achieving the Millennium Development Goal (MDG) 5 target, the Government of Bangladesh has set a new target to reduce the Maternal Mortality Ratio (MMR) to 70 per 100,000 live births, in line with the targets of the Sustainable Development Goals (SDGs). Averting deaths from maternal complications by ensuring quality health care could be an important path toward accelerating the reduction of the MMR. This formative research aimed to explore the provision of quality maternal health services at different levels of health facilities. The study was conducted in 1 district hospital (DH) and 4 Upazila health complexes (UHC) of Kurigram district of Bangladesh, utilizing both quantitative and qualitative research methods. We conducted 14 key informant interviews with facility managers and 20 in-depth interviews with health care providers and support staff. In addition, we observed 387 normal deliveries, among which we found 17 cases of postpartum haemorrhage (PPH) and 2 cases of eclampsia during the data collection period extending from July to September 2016. The quantitative data were analyzed using descriptive statistics, and the qualitative component underwent thematic analysis under the broad themes of facility readiness for maternal complication management and management of complications. Inadequacy of human resources was identified as the most important bottleneck in providing quality care to manage maternal complications. The DH had a particular paucity of human resources in the medical officer cadre, where about 61% of posts were unfilled. In the UHCs, the positions most often empty were obstetricians (75%), paediatricians (75%), staff nurses (65%), and anaesthetists (100%). The workload on the existing staff is increased by the persistence of vacant posts. The unavailability of anaesthetists and consultants prevents health care providers (HCP) of lower cadres from performing emergency operative procedures and forces them to refer patients, although the referral system is not well organized in rural Bangladesh. Insufficient bed capacity, inadequate training, and a shortage of emergency medicines are other factors hindering facility readiness. Among the 387 observed delivery cases, 17 (4.4%) were identified as PPH cases, and only 2 cases of eclampsia/pre-eclampsia were found. The majority of PPH patients were treated with uterine massage (16 out of 17, 94.1%) and injectable oxytocin (14 out of 17, 82.4%). The providers at the DH mentioned that they can manage PPH because diagnostic and blood transfusion services are available, although not around the clock. For the management of eclampsia/pre-eclampsia, HCPs provided diazepam, MgSO4, and other anti-hypertensives. The UHCs did not even have MgSO4 in stock, and one facility manager admitted that they treat eclampsia with diazepam only. The nurses of the UHCs were found to be afraid of handling eclampsia cases. Upcoming interventions must ensure refresher training of service providers, continuous availability of the essential medicines and equipment needed for complication management, a skilled health workforce, a functioning blood transfusion unit, and the pairing of consultants and anaesthetists if the newly set targets are to be reached.

Keywords: Bangladesh, health facilities, maternal complications, quality of care

Procedia PDF Downloads 235
114 The Role of Oral and Intestinal Microbiota in European Badgers

Authors: Emma J. Dale, Christina D. Buesching, Kevin R. Theis, David W. Macdonald

Abstract:

This study investigates the oral and intestinal microbiomes of wild-living European badgers (Meles meles) and will relate inter-individual differences to social contact networks, somatic and reproductive fitness, varying susceptibility to bovine tuberculosis (bTB), and olfactory advertisement. Badgers are an interesting model for this research, as they show great variation in body condition despite living in complex social networks and having access to the same resources. This variation in somatic fitness, in turn, affects breeding success, particularly in females. We postulate that microbiota play a central role in determining the success of an individual. Our preliminary results, characterising the microbiota of individual badgers, indicate unique microbial community compositions within social groups. This baseline information will inform further questions about the extent to which microbiota influence fitness. Hitherto, the potential role of microbiota has not been considered in determining host condition, nor other key fitness variables, namely communication and resistance to disease. Badgers deposit their faeces in communal latrines, which play an important role in olfactory communication. Odour profiles of anal and subcaudal gland secretions are highly individual-specific and encode information about group membership and fitness-relevant parameters, and their chemical composition depends strongly on symbiotic microbiota. As badgers sniff/lick (using their vomeronasal organ) and over-mark the faecal deposits of conspecifics, these microbial communities can be expected to vary with social contact networks. This is particularly important in the context of bTB, where badgers are assumed to transmit bTB to cattle as well as to conspecifics. Interestingly, we have found that some individuals are more susceptible to bTB than others. As acquired immunity, and thus potential susceptibility to infectious diseases, is known to depend on symbiotic microbiota in other mustelids, a role for oral microbiota in particular cannot currently be ruled out as an explanation for inter-individual differences in bTB susceptibility in badgers. Badgers are caught three times a year in the context of a long-term population study that began in 1987. As all badgers receive an individual tattoo upon first capture, age, natal as well as previous and current social group membership, and other life-history parameters are known for all animals. Swabs (subcaudal 'scent gland', anal, genital, nose, mouth, and ear) and faecal samples will be taken from all individuals and stored at -80 °C until processing. Microbial samples will be processed and identified at Wayne State University's Theis (Host-Microbe Interactions) Lab using high-throughput sequencing (16S rRNA-encoding gene amplification and sequencing). Gas chromatography/mass spectrometry analyses (in the context of olfactory communication) will be performed through an established collaboration with Dr. Veronica Tinnesand at Telemark University, Norway.

Keywords: communication, energetics, fitness, free-ranging animals, immunology

Procedia PDF Downloads 187
113 Nano-Enabling Technical Carbon Fabrics to Achieve Improved Through Thickness Electrical Conductivity in Carbon Fiber Reinforced Composites

Authors: Angelos Evangelou, Katerina Loizou, Loukas Koutsokeras, Orestes Marangos, Giorgos Constantinides, Stylianos Yiatros, Katerina Sofocleous, Vasileios Drakonakis

Abstract:

Owing to their outstanding strength-to-weight properties, carbon fiber reinforced polymer (CFRP) composites have attracted significant attention, finding use in various fields (sports, automotive, transportation, etc.). The current momentum indicates an increasing demand for their employment in high-value bespoke applications, such as avionics and electronic casings, damage-sensing structures, and EMI (electromagnetic interference) shielding structures, that dictate the use of materials with increased electrical conductivity both in-plane and through the thickness. Several research groups have focused on enhancing the through-thickness electrical conductivity of FRPs, in an attempt to combine their intrinsically high relative strength with an improved z-axis electrical response. However, only a limited number of studies deal with printing nano-enhanced polymer inks to produce a pattern at the dry-fabric level that could be used to fabricate CFRPs with improved through-thickness electrical conductivity. The present study investigates the use of a screen-printing process on technical dry fabrics with nano-reinforced polymer-based inks to achieve the required through-thickness conductivity, opening new pathways for the application of fiber reinforced composites in niche products. Commercially available inks and in-house prepared inks reinforced with electrically conductive nanoparticles are employed and printed in different patterns. The aim is to investigate both the effect of nanoparticle concentration and the droplet patterns (diameter, inter-droplet distance, and coverage) to optimize printing for the desired level of conductivity enhancement at the lamina level. The electrical conductivity is first measured at the ink level using a four-probe configuration to pinpoint the optimum concentrations to be employed. Upon printing of the different patterns, the coverage of the dry fabric area is assessed along with the permeability of the resulting dry fabrics, in line with the fabrication of CFRPs, which requires adequate wetting by the epoxy matrix. Results demonstrated increased electrical conductivities of the printed droplets, with the conductivity rising from the benchmark value of 0.1 S/m to between 8 and 10 S/m. Printing of dense and dispersed patterns has shown promising results in terms of increasing the z-axis conductivity without inhibiting penetration of the epoxy matrix at the processing stage of fiber reinforced composites. The high value and niche prospects of the applications that can stem from CFRPs with increased through-thickness electrical conductivity highlight the potential of the presented endeavor, signifying screen printing as a process to nano-enable z-axis electrical conductivity in composite laminae. This work was co-funded by the European Regional Development Fund and the Republic of Cyprus through the Research and Innovation Foundation (Project: ENTERPRISES/0618/0013).
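
The ink-level four-probe measurement mentioned above is conventionally reduced as follows for a thin printed film: sheet resistance Rs = (π/ln 2)·V/I, and bulk conductivity σ = 1/(Rs·t). The sketch below uses invented probe readings chosen to land in the reported 8-10 S/m range and neglects geometric edge corrections; it illustrates the standard reduction, not the study's measurement code.

```python
# Standard in-line four-point-probe reduction for a thin printed ink film:
# sheet resistance Rs = (pi/ln 2) * V/I, bulk conductivity sigma = 1/(Rs*t).
# Sample readings are invented; geometric edge corrections are neglected.
import math

I = 1.0e-4        # current sourced through the outer probes (A)
V = 9.8e-2        # voltage measured across the inner probes (V)
t = 25e-6         # printed film thickness (m)

Rs = (math.pi / math.log(2)) * V / I      # sheet resistance (ohm/sq)
sigma = 1.0 / (Rs * t)                    # conductivity (S/m), ~9 S/m here
print(f"sheet resistance: {Rs:.0f} ohm/sq, conductivity: {sigma:.1f} S/m")
```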

Keywords: CFRPs, conductivity, nano-reinforcement, screen-printing

Procedia PDF Downloads 151
112 Performance of CALPUFF Dispersion Model for Investigation the Dispersion of the Pollutants Emitted from an Industrial Complex, Daura Refinery, to an Urban Area in Baghdad

Authors: Ramiz M. Shubbar, Dong In Lee, Hatem A. Gzar, Arthur S. Rood

Abstract:

Air pollution is one of the biggest environmental problems in Baghdad, Iraq. The Daura refinery, located near the center of Baghdad, represents the largest industrial area and emits enormous amounts of pollutants; studying its gaseous pollutants and particulate matter is therefore very important for the environment and for the health of refinery workers and the people living in the surrounding areas. Some earlier studies investigated this area, but they relied on the basic Gaussian equation implemented in simple computer programs. That work was useful and important in its time, but during the last two decades major new production units were added to the Daura refinery, such as PU_3 (Power Unit 3, Boilers 11 and 12), CDU_1 (Crude Distillation Unit 1, 70,000 barrels), and CDU_2 (Crude Distillation Unit 2, 70,000 barrels). It is therefore necessary to use an advanced model to study air pollution in the region for recent years and to calculate the monthly emission rates of pollutants from the actual amounts of fuel consumed in each production unit; this should yield accurate pollutant concentrations and a realistic picture of dispersion and transport in the study area. In this study, to the best of the authors' knowledge, the CALPUFF model was used and examined for the first time in Iraq. CALPUFF, an advanced non-steady-state meteorological and air quality modeling system, was applied to investigate the concentrations of SO2, NO2, CO, and PM1-10μm in areas adjacent to the Daura refinery in the center of Baghdad. The CALPUFF modeling system includes three main components: CALMET, a diagnostic three-dimensional meteorological model; CALPUFF, the air quality dispersion model; and CALPOST, a post-processing package, together with an extensive set of preprocessing programs that interface the model to standard, routinely available meteorological and geophysical datasets. The aims of this work are to model and simulate the four pollutants (SO2, NO2, CO, and PM1-10μm) emitted from the Daura refinery over one year. Emission rates of these pollutants were calculated for twelve units comprising thirty plants and 35 stacks, using the monthly average fuel consumption of these production units. The performance of the CALPUFF model was assessed to determine whether it is appropriate and yields predictions of good accuracy compared with the available pollutant observations. The model was investigated for three stability classes (stable, neutral, and unstable) to characterize the dispersion of the pollutants under different meteorological conditions. The simulations showed that the dispersion of these pollutants in the region depends on the stability conditions and the environment of the study area; monthly and annual averages of the pollutants were used to visualize the dispersion in contour maps. High pollutant values were observed in this area; this study therefore recommends further investigation and analysis of the pollutants, reducing emission rates through modern techniques and the use of natural gas, increasing the stack heights of the units, and increasing the exit gas velocity from the stacks.
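
The fuel-based emission inventory step can be sketched in a few lines: the monthly emission rate of each pollutant is the fuel consumed multiplied by a fuel-specific emission factor, converted to grams per second for the dispersion model. All factors and consumption figures below are invented placeholders, not the study's inventory.

```python
# Fuel-based emission inventory sketch: Q = fuel consumed * emission factor.
# Emission factors and consumption figures below are invented placeholders.
SECONDS_PER_MONTH = 30 * 24 * 3600

emission_factors = {"SO2": 1.5, "NO2": 2.8, "CO": 0.6}   # kg pollutant / t fuel (assumed)
fuel_per_month_t = {"PU_3": 4200.0, "CDU_1": 3100.0}     # t fuel / month (assumed)

for unit, fuel in fuel_per_month_t.items():
    for pollutant, ef in emission_factors.items():
        q = fuel * ef / SECONDS_PER_MONTH * 1000          # g/s, a typical model input unit
        print(f"{unit} {pollutant}: {q:.2f} g/s")
```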

Keywords: CALPUFF, daura refinery, Iraq, pollutants

Procedia PDF Downloads 197
111 Ectopic Osteoinduction of Porous Composite Scaffolds Reinforced with Graphene Oxide and Hydroxyapatite Gradient Density

Authors: G. M. Vlasceanu, H. Iovu, E. Vasile, M. Ionita

Abstract:

Herein, the synthesis and characterization of a highly porous chitosan-gelatin scaffold reinforced with graphene oxide (GO) and hydroxyapatite (HAp) and crosslinked with genipin were targeted. In tissue engineering, chitosan and gelatin are two of the most robust biopolymers, with wide applicability due to their intrinsic biocompatibility, biodegradability, low antigenicity, affordability, and ease of processing. HAp, owing to its exceptional activity in tuning cell-matrix interactions, is acknowledged for its capability to sustain cellular proliferation by providing a bone-like native micro-environment for cell adjustment. Genipin is regarded as a top-class crosslinker, while GO is viewed as one of the most versatile and high-performing fillers. Composites with the HAp/biopolymer ratio of natural bone were obtained by cascaded sonochemical treatments, followed by straightforward casting and freeze-drying. Their structure was characterized by Fourier transform infrared spectroscopy (FTIR) and X-ray diffraction (XRD), while the overall morphology was investigated by scanning electron microscopy (SEM) and micro-computed tomography (µ-CT). Subsequently, in vitro enzymatic degradation was performed to identify the most promising compositions for the development of in vivo assays. Suitable GO dispersion within the biopolymer mix was ascertained from the absence of nanolayer-specific signals in both the FTIR and XRD spectra, while the characteristic spectral features of the polymers persisted as the GO load increased. Overall, the GO-induced material structuring, crystallinity variations, and chemical interactions of the compounds can be correlated with the physical features and bioactivity of each composite formulation. Moreover, the HAp distribution follows a promising density gradient tuned for hybrid osseous/cartilage architectures, which was mirrored in the mouse model tests. Hence, this synthesis route for a natural polymer blend/hydroxyapatite-graphene oxide composite material is anticipated to emerge as an influential formulation in bone tissue engineering. Acknowledgement: This work was supported by the project 'Work-based learning systems using entrepreneurship grants for doctoral and post-doctoral students' (Sisteme de invatare bazate pe munca prin burse antreprenor pentru doctoranzi si postdoctoranzi) - SIMBA, SMIS code 124705, and by a grant of the National Authority for Scientific Research and Innovation, Operational Program Competitiveness Axis 1 - Section E, Program co-financed from the European Regional Development Fund 'Investments for your future' under project number 154/25.11.2016, P_37_221/2015. The nano-CT experiments were possible due to the European Regional Development Fund through the Competitiveness Operational Program 2014-2020, Priority Axis 1, ID P_36_611, MySMIS code 107066, INOVABIOMED.

Keywords: biopolymer blend, ectopic osteoinduction, graphene oxide composite, hydroxyapatite

Procedia PDF Downloads 104
110 Digital Health During a Pandemic: Critical Analysis of the COVID-19 Contact Tracing Apps

Authors: Mohanad Elemary, Imose Itua, Rajeswari B. Matam

Abstract:

Virologists and public health experts had been predicting potential coronavirus pandemics for decades. The viruses that caused the SARS and MERS epidemics and the Nipah virus claimed many lives, yet the COVID-19 pandemic caused by the SARS-CoV-2 virus still surprised many scientific communities, experts, and governments with its ease of transmission and its pathogenicity. Governments reacted by locking down entire populations in their homes to combat the devastation caused by the virus, which led to loss of livelihood and economic hardship for many individuals and organizations. To revive national economies and support their citizens in resuming their lives, governments focused on the development and use of contact tracing apps as a digital way to track and trace exposure. Google and Apple introduced the Exposure Notification System (ENS) framework, and independent organizations and countries also developed different frameworks for contact tracing apps. The efficiency, popularity, and adoption rates of these various apps have differed across countries. In this paper, we present a critical analysis of the different contact tracing apps with respect to their efficiency, adoption rate, and general perception, and of the governmental strategies and policies that led to their development. Among European countries, each followed an individual approach to the same problem, resulting in different realizations of similarly functioning applications with differing levels of use and acceptance. The study conducted an extensive review of the existing literature, policies, and reports across multiple disciplines, from which a framework was developed and then validated through interviews with six key stakeholders in the field, including founders and executives in digital health startups and corporates as well as experts from international organizations like the World Health Organization. The result of this research is a framework of best practices and tactics that addresses three main questions regarding contact tracing apps: how to develop them, how to deploy them, and how to regulate them. The findings are based on the best practices applied by governments across multiple countries, the mistakes they made, and best practices applied in similar situations in the business world. For the development milestone, the findings include strategies for establishing frameworks for cooperation with the private sector and for designing the features and user experience of a transparent, effective, and rapidly adaptable app. For the deployment phase, several tactics are discussed regarding communication messages, marketing campaigns, persuasive psychology, and initial deployment-scale strategies. The paper also discusses the data privacy dilemma and how to build a more sustainable system of health-related data processing and utilization through principles-based regulation specific to health data that allows its use for the public good. This framework offers insights into strategies and tactics that could be implemented as protocols for future public health crises and emergencies, whether global or regional.

Keywords: contact tracing apps, COVID-19, digital health applications, exposure notification system

Procedia PDF Downloads 135
109 Additional Opportunities of Forensic Medical Identification of Dead Bodies of Unkown Persons

Authors: Saule Mussabekova

Abstract:

A number of chemical elements that are widespread in nature are seldom found in humans, and vice versa. This reflects a peculiarity of element accumulation in the body and its selective uptake regardless of widely varying environmental parameters. Microelemental analysis of human hair, particularly from dead bodies, is a new step in the development of modern forensic medicine, which needs reliable criteria for personal identification. Given the long-standing technogenic pressure of large industrial cities and the region-specific, multi-factor toxic effects of many industrial enterprises, it is important to assess the relevance and role of human hair research in evaluating the degree of deposition of specific pollutants. Hair is a highly sensitive biological indicator that makes it possible to assess the ecological situation and to zone large territories by geochemical methods. Moreover, monitoring the concentrations of chemical elements across the regions of Kazakhstan makes it possible to use these data in the forensic medical identification of unknown dead bodies. Methods based on determining the chemical composition of hair, followed by computer processing, allowed the data obtained to be compared with average values for sex and age, and causally significant deviations to be revealed. This makes it possible to presume the person's region of residence, focusing police search efforts for missing persons, and to carry out targeted legal actions for further identification by creating a more optimal and strictly individual identification scheme. Hair is a highly suitable material for forensic research, as it can be stored long term without time limitations or special equipment. In addition, quantitative microelement analysis correlates well with environmental pollution levels, reflects occupational diseases, and helps with pinpoint accuracy not only to determine a person's region of temporary residence but also to establish the regions of their migration. Peculiarities of the elemental composition of human hair were established, independent of age and sex, for persons residing in particular territories of Kazakhstan. Data on the average content of 29 chemical elements in the hair of populations in different regions of Kazakhstan were systematized. Concentration coefficients of the studied elements in hair, relative to the regional average values, were calculated for each region. Groups of regions with a specific spectrum of elements were identified; these elements accumulate in hair in quantities exceeding average levels. Our results showed significant differences in the concentrations of chemical elements between the studied groups and demonstrated that the population of Kazakhstan is exposed to different toxic substances, depending on the atmospheric emissions of the industrial enterprises dominating each region. The research performed showed that the elemental composition of hair from people residing in different regions of Kazakhstan reflects the technogenic spectrum of elements.
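
The concentration-coefficient step lends itself to a brief illustration: each element's concentration in a hair sample is divided by the regional average, and elements with coefficients far from 1 point toward or away from a candidate region. The reference values in the sketch below are invented placeholders, not the systematized Kazakhstan data.

```python
# Concentration coefficients: element level in a hair sample divided by the
# regional average; values near 1 across elements suggest a candidate region.
# All reference values below are invented placeholders.
regional_means = {                    # mean hair content, ug/g (assumed)
    "region_A": {"Pb": 2.0, "Zn": 180.0, "Cu": 11.0},
    "region_B": {"Pb": 8.5, "Zn": 150.0, "Cu": 14.0},
}
sample = {"Pb": 9.1, "Zn": 160.0, "Cu": 13.5}   # unknown body's hair (assumed)

for region, means in regional_means.items():
    coeffs = {el: sample[el] / means[el] for el in sample}
    score = sum(abs(c - 1.0) for c in coeffs.values())  # crude closeness measure
    print(region, {el: round(c, 2) for el, c in coeffs.items()}, f"score={score:.2f}")
```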

Keywords: analysis of elemental composition of hair, forensic medical research of hair, identification of unknown dead bodies, microelements

Procedia PDF Downloads 142
108 A Chemical Perspective to Nineteenth-Century Female Medical Pioneers: Utilizing Mass Spectrometry in the Museum Space

Authors: Elizabeth R. LaFave, Grayson Sink, Anna Vassallo, Samantha Mills, Eli G. Hvastkovs

Abstract:

Throughout history and into modern times, the continuation of male influence over female healthcare has created inadequacies in the availability of and access to treatments, often further limited in rural communities. The historical plight of women in healthcare can be understood by studying the advancements made by women in the field, both through their career arcs and by delving into the treatments they offered. An early example is the case of Martha Ballard (1735-1812), a midwife in Maine who practiced when female practitioners were dismissed in favor of less educated male physicians, a practice that remained well accepted into the twentieth century. To overcome these setbacks, a strategy used by some female practitioners was to develop and market their own remedies in an attempt to better serve female patients. By highlighting the compromises and social manipulation of female entrepreneurs, in comparison with the medicines they developed and used, we can map their ability to carve a specific niche for themselves and their targeted customers. The application of modern chemical approaches in a historical context enhances a variety of perspectives within the museum sphere that are necessary for understanding the female plight in both medical care and service. To further examine the overall bias toward and scrutiny of women in the medical field, specifically those undertaking entrepreneurial roles, examples of alternative remedies from female founders are analyzed using these approaches. Modern analytical chemistry techniques, specifically mass spectrometry (MS), have been successful in providing compositional analyses of both labeled and unlabeled ingredients in old medicines. Previously, we analyzed two forms of alternative treatment created by male medical professionals to address lingering historical questions of purity and validity; although primarily sugar-based, both Humphreys' Specifics and Boericke & Tafel remedies also contained unique ingredients, albeit in small quantities, with medicinal properties. Here, we applied the same methodology to study another highly politicized 19th-century debate surrounding the contribution and role of women in the medical profession by analyzing three remedies, each from a different female-led manufacturing company: Mrs. Joe Person's, Lydia Pinkham's, and Winslow's Syrups. MS analyses of both labeled and unlabeled ingredients showed that Winslow's and Pinkham's remedies were similar to their male counterparts' in advertising strategy, targeted customer base, and overall composition (primarily sugar-based, with small amounts of unique ingredients). In effect, these unbiased chemical assessments are used to dissect the rationality of both market and physician criticism of each individual manufacturer through assessments of authenticity and benefaction, and through comparison among female entrepreneurs and their aims to enter the medical community (i.e., geographic location, market size). Our work aims to increase collaboration between STEM (Science, Technology, Engineering, Mathematics)-based fields and historical museum studies on a larger scale, while also addressing questions of potential bias toward women in the medical community by comparison with their male counterparts, and providing in-depth historical analyses to unravel the individual strategies used to overcome these setbacks.

Keywords: nineteenth-century medicine, alternative remedies, female healthcare, chemical analyses, mass spectrometry

Procedia PDF Downloads 87
107 Artificial Intelligence Impact on the Australian Government Public Sector

Authors: Jessica Ho

Abstract:

AI has helped governments, businesses, and industries transform the way they do things. AI is used to automate tasks, improving decision-making and efficiency; embedded in sensors and automation, it helps save time and eliminate human error in repetitive tasks. Today, we see AI growing on the collection of vast amounts of data to forecast with greater accuracy, inform decision-making, adapt to changing market conditions, and offer more personalised services based on consumer habits and preferences. Governments around the world share the opportunity to leverage these disruptive technologies to improve productivity while reducing costs. In addition, these intelligent solutions can help streamline government processes to deliver more seamless and intuitive user experiences for employees and citizens. This is a critical challenge for the NSW Government, as we are unable to determine the risk brought by the unprecedented pace of adoption of AI solutions in government. Government agencies must ensure that their use of AI complies with relevant laws and regulatory requirements, including those related to data privacy and security. Furthermore, there will always be ethical concerns surrounding the use of AI, such as the potential for bias, intellectual property rights, and its impact on job security. Within NSW's public sector, agencies are already testing AI for crowd control, infrastructure management, fraud compliance, public safety, transport, and police surveillance. Citizens are also attracted to the ease of use and accessibility of AI solutions, which do not require specialised technical skills. This increased accessibility, however, must be balanced against greater risk and exposure for the health and safety of citizens. Meanwhile, public agencies struggle to keep up with this pace while minimising risks, and the low entry cost and open-source nature of generative AI have led to a rapid, organic increase in the development of AI-powered apps: 'There is an AI for That' in government. Other challenges include the fact that there appear to be no legislative provisions that expressly authorise the NSW Government to use AI to make decisions. On the global stage, there are too many actors in the regulatory space, and a sovereign response is needed to minimise multiplicity and regulatory burden. Therefore, traditional corporate risk and governance frameworks, as well as regulatory and legislative frameworks, will need to be re-evaluated against AI's unique challenges: its rapidly evolving nature, ethical considerations, and heightened regulatory scrutiny affecting consumer safety and increasing risks for government. Creating an effective, efficient NSW Government governance regime, adapted to the range of different approaches to AI applications, is not merely a matter of overcoming technical challenges. Technologies have a wide range of social effects on our surroundings and behaviours. There is compelling evidence that Australia's sustained social and economic advancement depends on AI's ability to spur economic growth, boost productivity, and address a wide range of societal and political issues. AI may also inflict significant damage, and if such harm is not addressed, the public's confidence in this kind of innovation will be weakened. This paper suggests several forward-looking and agile AI regulatory approaches for consideration that simultaneously foster innovation and protect human rights.
The anticipated outcome is to ensure that the NSW Government matches the rising levels of innovation in AI technologies with appropriate and balanced innovation in AI governance.

Keywords: artificial intelligence, machine learning, rules, governance, government

Procedia PDF Downloads 70
106 Relationship Between Brain Entropy Patterns Estimated by Resting State fMRI and Child Behaviour

Authors: Sonia Boscenco, Zihan Wang, Euclides José de Mendoça Filho, João Paulo Hoppe, Irina Pokhvisneva, Geoffrey B.C. Hall, Michael J. Meaney, Patricia Pelufo Silveira

Abstract:

Entropy can be described as a measure of the number of states of a system; when applied to physiological time-based signals, it serves as a measure of complexity. In functional connectivity data, entropy can account for the moment-to-moment variability that is neglected in traditional functional magnetic resonance imaging (fMRI) analyses. While resting-state brain entropy has been associated with pathological conditions such as schizophrenia, no investigations have explored the association between brain entropy measures and individual differences in behavior in healthy children. We describe a novel exploratory approach to evaluating resting-state fMRI data in two child cohorts, MAVAN (N = 54, 4.5 years, 48% males) and GUSTO (N = 206, 4.5 years, 48% males), and its associations with child behavior, which can be used in future research in the context of child exposures and long-term health. Following rs-fMRI data pre-processing, Shannon entropy was calculated across 32 network regions of interest, yielding 496 unique functional connections, and partial correlation analysis adjusted for sex was performed to identify associations between the entropy data and Strengths and Difficulties Questionnaire scores in MAVAN and Child Behavior Checklist domains in GUSTO. Significance was set at p < 0.01, and we found eight significant associations in GUSTO. Negative associations were found between oppositional defiant problems and two frontoparietal-posterior cerebellar connections (r = -0.212, p = 0.006; r = -0.200, p = 0.009). Positive associations were identified between somatic complaints and four default mode connections: salience insula (r = 0.202, p < 0.01), dorsal attention intraparietal sulcus (r = 0.231, p = 0.003), language inferior frontal gyrus (r = 0.207, p = 0.008), and language posterior superior temporal gyrus (r = 0.210, p = 0.008). Positive associations were also found between an insula-frontoparietal connection and attention deficit/hyperactivity problems (r = 0.200, p < 0.01), and between an insula-default mode connection and pervasive developmental problems (r = 0.210, p = 0.007). In MAVAN, ten significant associations were identified. Two positive associations were found with prosocial scores: the salience prefrontal cortex-dorsal attention connection (r = 0.474, p = 0.005) and the salience supramarginal gyrus-dorsal attention intraparietal sulcus connection (r = 0.447, p = 0.008). The insula-prefrontal connection was negatively associated with peer problems (r = -0.437, p < 0.01). Conduct problems were negatively associated with six separate connections, including the left salience insula and right salience insula (r = -0.449, p = 0.008), the left salience insula and right salience supramarginal gyrus (r = -0.512, p = 0.002), the default mode and visual networks (r = -0.444, p = 0.009), the dorsal attention and language networks (r = -0.490, p = 0.003), and the default mode and posterior parietal cortex (r = -0.546, p = 0.001). Entropy measures of resting-state functional connectivity can thus be used to identify individual differences in brain function that are correlated with variation in behavioral problems in healthy children. Further studies applying this marker in the context of environmental exposures are warranted.
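
The entropy calculation is standard Shannon entropy, H = -Σ pᵢ log₂ pᵢ, over a discretized signal; with 32 regions of interest there are 32·31/2 = 496 unique region pairs, matching the number of connections reported. The sketch below computes the entropy of one simulated BOLD series with an assumed histogram binning; the cohorts' actual pre-processing and entropy pipeline may differ.

```python
# Shannon entropy of a single ROI's resting-state time series,
# H = -sum(p_i * log2(p_i)) over histogram-binned signal amplitudes.
# The binning choice and the simulated signal are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(42)
bold = rng.normal(size=300)                    # stand-in for a preprocessed BOLD series

counts, _ = np.histogram(bold, bins=16)        # discretize amplitudes into 16 bins
p = counts / counts.sum()
p = p[p > 0]                                   # drop empty bins (0*log0 treated as 0)
H = -np.sum(p * np.log2(p))
print(f"Shannon entropy: {H:.3f} bits")
```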

Keywords: child behaviour, functional connectivity, imaging, Shannon entropy

Procedia PDF Downloads 202