Search results for: validating
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 191

41 The Emergence of Memory at the Nanoscale

Authors: Victor Lopez-Richard, Rafael Schio Wengenroth Silva, Fabian Hartmann

Abstract:

Memcomputing is a computational paradigm that combines information processing and storage on the same physical platform. Key elements for this topic are devices with an inherent memory, such as memristors, memcapacitors, and meminductors. Despite the widespread emergence of memory effects in various solid systems, a clear understanding of the basic microscopic mechanisms that trigger them remains a puzzling task. We report basic ingredients of the theory of solid-state transport, intrinsic to a wide range of mechanisms, as sufficient conditions for a memristive response that point to the natural emergence of memory. This emergence should be discernible under an adequate set of driving inputs, as highlighted by our theoretical predictions; general common trends can thus be listed that become the rule and not the exception, with contrasting signatures according to symmetry constraints, either built-in or induced by external factors at the microscopic level. Explicit analytical figures of merit for the memory modulation of the conductance are presented, unveiling very concise and accessible correlations between general intrinsic microscopic parameters, such as relaxation times, activation energies, and efficiencies (encountered throughout various fields in physics), and external drives: voltage pulses, temperature, illumination, etc. These building blocks of memory can be extended to a vast universe of materials and devices, with combinations of parallel and independent transport channels, providing an efficient and unified physical explanation for a wide class of resistive memory devices that have emerged in recent years. Its simplicity and practicality have also allowed a direct correlation with reported experimental observations, with the potential of pointing out the optimal driving configurations. The main methodological tools combine three quantum transport approaches, the Drude-like model, the Landauer-Büttiker formalism, and field-effect transistor emulators, with the microscopic characterization of nonequilibrium dynamics. Both qualitative and quantitative agreements with available experimental responses are provided for validating the main hypothesis. This analysis also sheds light on the basic universality of complex natural impedances of systems out of equilibrium and might help pave the way for new trends in the area of memory formation as well as in its technological applications.
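
The abstract's figures of merit are not reproduced here, but the qualitative idea of conductance memory emerging from a thermally activated relaxation time can be illustrated with a minimal sketch. The model below is a generic single-channel toy model, not the authors' formalism; the activation energy, attempt time, and equilibrium-conductance form are assumed values chosen only to produce a visible pinched hysteresis loop under a sinusoidal drive.

import numpy as np

# Toy memristive response: conductance g relaxes toward a bias-dependent
# equilibrium value with a thermally activated relaxation time (assumed parameters).
kB = 8.617e-5                          # Boltzmann constant, eV/K
Ea, tau0, T = 0.30, 1e-6, 300.0        # hypothetical activation energy (eV), attempt time (s), temperature (K)
tau = tau0 * np.exp(Ea / (kB * T))     # relaxation time, roughly 0.1 s here

def g_eq(v, g0=1e-4, alpha=0.5):
    # Hypothetical equilibrium conductance under bias v (illustrative form only).
    return g0 * (1.0 + alpha * np.tanh(v))

dt, f = 1e-4, 10.0                     # time step (s) and drive frequency (Hz)
t = np.arange(0.0, 0.5, dt)
v = np.sin(2 * np.pi * f * t)          # sinusoidal voltage drive
g = np.empty_like(t)
g[0] = g_eq(0.0)
for i in range(1, t.size):             # Euler integration of dg/dt = (g_eq(v) - g) / tau
    g[i] = g[i - 1] + dt * (g_eq(v[i - 1]) - g[i - 1]) / tau
current = g * v                        # plotting current against v traces a pinched hysteresis loop

Because tau is comparable to the drive period in this example, the conductance lags the voltage and the current-voltage curve does not retrace itself, which is the basic signature of memory the abstract refers to.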

Keywords: memories, memdevices, memristors, nonequilibrium states

Procedia PDF Downloads 66
40 The Psychometric Properties of the Team Climate Inventory Scale: A Validation Study in Jordan’s Collectivist Society

Authors: Suhair Mereish

Abstract:

This research examines the climate for innovation in organisations, with the aim of validating the psychometric properties of the Team Climate Inventory (TCI-14) for Jordan’s collectivist society. The innovativeness of teams may be improved or obstructed by the climate within the team. Further, personal factors are considered an important element that influences the climate for innovation. Accordingly, measuring employees' personality traits using the Big Five Inventory (BFI-44) could provide insights that aid in understanding how to improve innovation. Thus, studying the climate for innovation and its associations with personality traits is valuable, considering the insights it could offer on employee performance, job satisfaction, and well-being. Essentially, the Team Climate Inventory instrument has never been tested in Jordan’s collectivist society. Accordingly, in order to address the existing gap in the literature as a whole and, more specifically, in Jordan, it is essential to investigate its factorial structure and reliability in this particular context. It is also important to explore whether the factorial structure of the Team Climate Inventory in Jordan’s collectivist society demonstrates a similar or different structure to what has been found in individualistic ones. Lastly, examining whether there are associations between the Team Climate Inventory and the personality traits of Jordanian employees is pivotal. The quantitative study was carried out among Jordanian employees working in two of the top 20 companies in Jordan, a shipping and logistics company (N=473) and a telecommunications company (N=219). To generalise the findings, this was followed by collecting data from the general population of the country (N=399). The participants completed the Team Climate Inventory. Confirmatory factor analyses and reliability tests were conducted to confirm the factorial structure, validity, and reliability of the inventory. Findings showed that the four-factor structure of the Team Climate Inventory in Jordan revealed a structure similar to those found in Western cultures. The four-factor structure was confirmed with good fit indices and reliability values. Moreover, for climate for innovation, regression analysis identified agreeableness (positive) and neuroticism (negative) from the Big Five Inventory as significant predictors. This study will contribute to knowledge in several ways: first, by examining the reliability and factorial structure in a Jordanian collectivist context rather than a Western individualistic one; second, by comparing the Team Climate Inventory structure in Jordan with findings for the Team Climate Inventory from Western individualistic societies; and third, by studying its relationships with personality traits in that country. Furthermore, findings from this study will assist practitioners in the field of organisational psychology and development to improve the climate for innovation for employees working in organisations in Jordan. It is also expected that the results of this research will provide recommendations to professionals in the business psychology sector regarding the characteristics of employees who hold positive and negative perceptions of the workplace climate.

Keywords: big five inventory, climate for innovation, collectivism, individualism, Jordan, team climate inventory

Procedia PDF Downloads 30
39 Differentially Expressed Genes in Atopic Dermatitis: Bioinformatics Analysis of Pooled Microarray Gene Expression Datasets in Gene Expression Omnibus

Authors: Danna Jia, Bin Li

Abstract:

Background: Atopic dermatitis (AD) is a chronic and refractory inflammatory skin disease characterized by relapsing eczematous and pruritic skin lesions. The global prevalence of AD ranges from 1% to 20%, and its incidence rates are increasing. It affects individuals from infancy to adulthood, significantly impacting their daily lives and social activities. Despite its major health burden, the precise mechanisms underlying AD remain unknown. Understanding the genetic differences associated with AD is crucial for advancing diagnosis and targeted treatment development. This study aims to identify candidate genes of AD by using bioinformatics analysis. Methods: We conducted a comprehensive analysis of four pooled transcriptomic datasets (GSE16161, GSE32924, GSE130588, and GSE120721) obtained from the Gene Expression Omnibus (GEO) database. Differential gene expression analysis was performed using the R statistical language. The differentially expressed genes (DEGs) between AD patients and normal individuals were functionally analyzed using Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment. Furthermore, a protein-protein interaction (PPI) network was constructed to identify candidate genes. Results: Among the patient-level gene expression datasets, we identified 114 shared DEGs, consisting of 53 upregulated genes and 61 downregulated genes. Functional analysis using GO and KEGG revealed that the DEGs were mainly associated with the negative regulation of transcription from the RNA polymerase II promoter, membrane-related functions, protein binding, and the human papillomavirus infection pathway. Through the PPI network analysis, we identified eight core genes, including CD44, STAT1, HMMR, AURKA, MKI67, and SMARCA4. Conclusion: This study elucidates key genes associated with AD, providing potential targets for diagnosis and treatment. The identified genes have the potential to contribute to the understanding and management of AD. The bioinformatics analysis conducted in this study offers new insights and directions for further research on AD. Future studies can focus on validating the functional roles of these genes and exploring their therapeutic potential in AD. While these findings require further verification with in vivo and in vitro experiments, they provide some initial insights into the dysfunctional inflammatory and immune responses associated with AD. Such information offers the potential to develop novel therapeutic targets for use in preventing and treating AD.
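
For readers unfamiliar with the pooling step, the shared-DEG count reported above comes from intersecting the per-dataset DEG lists. The study performed this in R; the snippet below is only an illustrative Python equivalent, and the gene lists are placeholders rather than the actual GEO results.

# Illustrative only: intersect per-dataset DEG lists to obtain the shared DEGs.
degs = {
    "GSE16161":  {"CD44", "STAT1", "HMMR", "AURKA", "FLG"},
    "GSE32924":  {"CD44", "STAT1", "HMMR", "MKI67", "LOR"},
    "GSE130588": {"CD44", "STAT1", "AURKA", "MKI67", "SMARCA4"},
    "GSE120721": {"CD44", "STAT1", "HMMR", "AURKA", "MKI67"},
}
shared = set.intersection(*degs.values())
print(sorted(shared))  # genes differentially expressed in every dataset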

Keywords: atopic dermatitis, bioinformatics, biomarkers, genes

Procedia PDF Downloads 51
38 Radar Cross Section Modelling of Lossy Dielectrics

Authors: Ciara Pienaar, J. W. Odendaal, J. Joubert, J. C. Smit

Abstract:

The radar cross section (RCS) of dielectric objects plays an important role in many applications, such as low observability technology development, drone detection and monitoring, as well as coastal surveillance. Various materials are used to construct the targets of interest, such as metal, wood, composite materials, radar absorbent materials, and other dielectrics. Since simulated datasets are increasingly being used to supplement in-field measurements, as simulation is more cost-effective and a larger variety of targets can be simulated, it is important to have a high level of confidence in the predicted results. Confidence can be attained through validation. Various computational electromagnetic (CEM) methods are capable of predicting the RCS of dielectric targets. This study extends previous studies by validating full-wave and asymptotic RCS simulations of dielectric targets with measured data. The paper provides measured RCS data of a number of canonical dielectric targets exhibiting different material properties; these measurements are used to validate numerous CEM methods. The dielectric properties are accurately characterized to reduce the uncertainties in the simulations. Finally, an analysis of the sensitivity of oblique and normal incidence scattering predictions to material characteristics is also presented. In this paper, the ability of several CEM methods, including the method of moments (MoM) and physical optics (PO), to calculate the RCS of dielectrics was validated with measured data. A few dielectrics, exhibiting different material properties, were selected, and several canonical targets, such as flat plates and cylinders, were manufactured. The RCS of these dielectric targets was measured in a compact range at the University of Pretoria, South Africa, over a frequency range of 2 to 18 GHz and a 360° azimuth angle sweep. This study also investigated the effect of slight variations in the material properties on the calculated RCS results, by varying the material properties within a realistic tolerance range and comparing the calculated RCS results. Interesting measured and simulated results have been obtained. Large discrepancies were observed between the different methods as well as the measured data. It was also observed that the accuracy of the RCS data of the dielectrics can be frequency and angle dependent. The simulated RCS for some of these materials also exhibits high sensitivity to variations in the material properties. Comparison graphs between the measured and simulated RCS datasets are presented and their validation is discussed. Finally, the effect that small tolerances in the material properties have on the calculated RCS results is shown, and the importance of accurate dielectric material properties for validation purposes is discussed.
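
As a rough order-of-magnitude check on the asymptotic side (not a substitute for the full-wave solvers used in the paper), the physical-optics RCS of a large flat plate at normal incidence is 4πA²/λ², and for a thick lossy dielectric plate the specular return is reduced approximately by the Fresnel power reflection coefficient. The plate size, frequency, and permittivity below are assumed illustrative values, not the measured targets.

import numpy as np

c = 3e8
f = 10e9                                   # 10 GHz, inside the 2-18 GHz measured band
lam = c / f
A = 0.1 * 0.1                              # 10 cm x 10 cm plate area (m^2), assumed
eps_r = 4.0 - 0.4j                         # assumed complex relative permittivity
gamma = (1 - np.sqrt(eps_r)) / (1 + np.sqrt(eps_r))   # normal-incidence Fresnel reflection
rcs_pec = 4 * np.pi * A**2 / lam**2        # physical-optics reference for a conducting plate
rcs_diel = rcs_pec * abs(gamma) ** 2       # crude estimate for the lossy dielectric plate
print(10 * np.log10(rcs_pec), 10 * np.log10(rcs_diel), "dBsm")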

Keywords: asymptotic, CEM, dielectric scattering, full-wave, measurements, radar cross section, validation

Procedia PDF Downloads 217
37 Advanced Palliative Aquatics Care Multi-Device AuBento for Symptom and Pain Management by Sensorial Integration and Electromagnetic Fields: A Preliminary Design Study

Authors: J. F. Pollo Gaspary, F. Peron Gaspary, E. M. Simão, R. Concatto Beltrame, G. Orengo de Oliveira, M. S. Ristow Ferreira, J.C. Mairesse Siluk, I. F. Minello, F. dos Santos de Oliveira

Abstract:

Background: Although palliative care policies and services have been developed, research in this area continues to lag. An integrated model of palliative care is suggested, which includes complementary and alternative services aimed at improving the well-being of patients and their families. The palliative aquatics care multi-device (AuBento) uses several electromagnetic techniques to decrease pain and promote well-being through relaxation and interaction among patients, specialists, and family members. Aim: The scope of this paper is to present a preliminary design study of a device capable of exploring the various existing theories on the biomedical application of magnetic fields. This will be achieved by standardizing clinical data collection with sensory integration and adding new therapeutic options, in order to develop advanced palliative aquatics care and innovate in symptom and pain management. Methods: The research methodology was based on the Work Package methodology for the development of projects, separating the activities into seven different Work Packages. The theoretical basis was established through an integrative literature review according to the specific objectives of each Work Package and provided a broad analysis, which, together with the multiplicity of proposals and the interdisciplinarity of the research team involved, generated consistent and understandable complex concepts in the biomedical application of magnetic fields for palliative care. Results: The AuBento ambience was designed with restricted electromagnetic exposure (avoiding data collection bias) and sensory integration (allowing relaxation associated with hydrotherapy, music therapy, and chromotherapy, similar to a flotation tank). The device has a multipurpose configuration enabling classic or exploratory options on the use of the biomedical application of magnetic fields at the researcher's discretion. Conclusions: Several patients in diverse therapeutic contexts may benefit from the use of magnetic fields or fluids, thus validating the stimulus for clinical research in this area. A device offering controlled and multipurpose environments may contribute to standardizing research and exploring new theories. Future research may demonstrate the possible benefits of the aquatics care multi-device AuBento in improving well-being and symptom control in palliative care patients and their families.

Keywords: advanced palliative aquatics care, magnetic field therapy, medical device, research design

Procedia PDF Downloads 102
36 Validation of an Acuity Measurement Tool for Maternity Services

Authors: Cherrie Lowe

Abstract:

The TrendCare Patient Dependency System is currently utilized by a large number of maternity services across Australia, New Zealand, and Singapore. In 2012, 2013, and 2014, validation studies were initiated in all three countries to validate the acuity tools used for women in labour and postnatal mothers and babies. This paper presents the findings of the validation study. Aim: The aims of this study were to identify whether the care hours provided by the TrendCare acuity system were an accurate reflection of the care required by women and babies, and to obtain evidence of changes required to acuity indicators and/or category timings to ensure the TrendCare acuity system remains reliable and valid across a range of maternity care models in three countries. Method: A non-experimental action research methodology was used across four District Health Boards in New Zealand, two large public Australian maternity services, and a large tertiary maternity service in Singapore. Standardized data collection forms and timing devices were used to collect midwife contact times with the women and babies included in the study. Rejection processes excluded samples where care was incomplete or rationed. The variances between the actual timed midwife/mother/baby contact and the TrendCare acuity times were identified and investigated. Results: 87.5% (18) of TrendCare acuity category timings matched the actual timings recorded for midwifery care, while 12.5% (3) of TrendCare night duty categories provided fewer minutes of care than the actual timings. 100% of labour ward TrendCare categories matched the actual timings for midwifery care. The actual times recorded for assistance to New Zealand independent midwives in the labour ward showed a significant deviation from previous studies, demonstrating the need for additional time allocations in TrendCare. Conclusion: The results demonstrated the importance of regularly validating the TrendCare category timings against the care hours required, as variations in models of care and length of stay in maternity units have increased midwifery workloads on the night shift. The level of assistance provided by the core labour ward staff to the independent midwife has increased substantially. Outcomes: As a consequence of this study, changes were made to the night duty TrendCare maternity categories, additional acuity indicators were developed, and the times for assisting independent midwives were increased. The updated TrendCare version was delivered to maternity services in 2014.

Keywords: maternity, acuity, research, nursing workloads

Procedia PDF Downloads 344
35 Endemic Asteraceae from Mauritius Islands as Potential Phytomedicines

Authors: S. Kauroo, J. Govinden Soulange, D. Marie

Abstract:

Psiadia species from the Asteraceae are traditionally used in the folk medicine of Mauritius to treat cutaneous and bronchial infections. The present study aimed at validating the phytomedicinal properties of selected species from the Asteraceae family, namely Psiadia arguta, Psiadia viscosa, Psiadia lithospermifolia, and Distephanus populifolius. Dried hexane, ethyl acetate, and methanol leaf extracts were studied for their antioxidant properties using the DPPH (1,1-diphenyl-2-picryl-hydrazyl), FRAP (Ferric Reducing Ability of Plasma), and deoxyribose assays. Antibacterial activity against human pathogenic bacteria, namely Escherichia coli (ATCC 27853), Staphylococcus aureus (ATCC 29213), Enterococcus faecalis (ATCC 29212), Klebsiella pneumoniae (ATCC 27853), Pseudomonas aeruginosa (ATCC 27853), and Bacillus cereus (ATCC 11778), was measured using the broth microdilution assay. Qualitative phytochemical screening using standard methods revealed the presence of coumarins, tannins, leucoanthocyanins, and steroids in all the tested extracts. The measured phenolics level of the selected plant extracts varied from 24.0 to 231.6 mg GAE/g, with the maximum level in the methanol extracts of all four species. The highest flavonoid and proanthocyanidin contents were noted in Psiadia arguta methanolic extracts, with 65.7±1.8 mg QE/g and 5.1±0.0 mg CAT/g dry weight (DW) extract, respectively. The maximum free radical scavenging activity was measured in Psiadia arguta methanol and ethyl acetate extracts, with IC50 values of 11.3±0.2 and 11.6±0.2 µg/mL, respectively, followed by Distephanus populifolius methanol extracts with an IC50 of 11.3±0.8 µg/mL. The maximum ferric reducing antioxidant potential was noted in Psiadia lithospermifolia methanol extracts, with a FRAP value of 18.8±0.4 µmol Fe2+/L/g DW. The antioxidant capacity based on DPPH and deoxyribose values was negatively related to total phenolics, flavonoid, and proanthocyanidin content, while the ferric reducing antioxidant potential was strongly correlated with total phenolics, flavonoid, and proanthocyanidin content. All four species exhibited antimicrobial activity against the tested bacteria (both Gram-negative and Gram-positive). Interestingly, the hexane and ethyl acetate extracts of Psiadia viscosa and Psiadia lithospermifolia were more active than the control antibiotic chloramphenicol. The minimum inhibitory concentration (MIC) values for hexane and ethyl acetate extracts of Psiadia viscosa and Psiadia lithospermifolia against the tested bacteria ranged from 62.5 to 500 µg/mL. These findings validate the use of the tested Asteraceae in the traditional medicine of Mauritius and also highlight their pharmaceutical potential as prospective phytomedicines.

Keywords: antibacterial, antioxidant, DPPH, flavonoids, FRAP, Psiadia spp

Procedia PDF Downloads 496
34 Gendered Apparatus of a Military: The Role of Military Wives in Defining Security

Authors: Taarika Singh

Abstract:

Military wives, women married to army officers, have largely been recognized as mere supporters or auxiliaries to military men rather than propagators of thought and ideologies. The military wife (and her participation) is often dismissed as 'private', 'domestic', or 'trivial' and is acknowledged, if at all, only as an (inevitable/normative) entity, seen as a natural product/outcome of militarization. It is because the military wife has come to be constructed and accepted as normative by states and militaries that women of the military are easily 'trivialised' and are made to appear socially, politically, or theoretically irrelevant and/or insignificant. This paper, using ethnography -- structured and semi-structured interviews -- makes a gendered analysis of militarization by bringing the military wife to the forefront and placing her at the nexus of the military and state apparatus. Moving away from gendered analyses that focus on the impact of militarization on women or draw attention to the ways in which militarization has been challenged/resisted by women, the paper pays attention to the centrality of women in shaping, validating, and perpetuating militarization, patriarchal control, and gendered hierarchies. The paper will demonstrate how military wives accept and comply with patriarchy as an institutional form of social organization that extends beyond the family and kinship relations into the military as an organization of the state. The paper will draw attention to the ways in which military norms, patriarchal values, and belief systems shape the social personhood, identity, and worldview of military wives; as a consequence of which, women play a central role in upholding and reproducing social inequalities and hierarchies, and in shaping social status and power relationships among men and women within and outside the military. The paper will allude to the processes and ideologies via which women a) accept and reproduce men as exclusive holders of power, status, and privilege; and b) recognize international relations, politics, and matters related to security to be male-dominated arenas inviting overwhelming masculine participation. In doing so, the paper will argue that women of the military play a critical role in perpetuating and upholding gendered meanings associated with the notion of and discourse around security. The paper will illustrate how military wives accept and assume security to be an inherently gendered idea -- a masculine notion, a male-dominated arena, something granted by men. In other words, the paper will demonstrate how the militarization of military wives and the perpetuation of militarization by military wives plays a crucial role in propagating and perpetuating security as a masculine notion or a male-dominated arena. The paper will then question the degree to which such gendered analyses can shape the broader meanings, definitions, and discourses around security, matters related to security, and security threats.

Keywords: gender, militarisation, security, women

Procedia PDF Downloads 119
33 Transforming the Education System for the Innovative Society: A Case Study

Authors: Mario Chiasson, Monique Boudreau

Abstract:

Problem statement: Innovation in education has become a central topic of discussion at various levels, including schools and the scholarly literature, driven by the global technological advancements of Industry 4.0. This study aims to contribute to the ongoing dialogue by examining the role of innovation in transforming school culture through the reimagination of traditional structures. The study argues that such a transformation necessitates an understanding and experience of systems leadership. This paper presents the case of the Francophone South School District, where a transformative initiative created an innovative learning environment by engaging students, teachers, and community members collaboratively through eco-communities. Traditional barriers and structures in education were dismantled to facilitate this process. The research component of this paper focuses on the Intr’Appreneur project, a unique initiative launched by the district team in New Brunswick, Canada, to support a system-wide transformation towards progressive and innovative organizational models. Methods: This study is part of a larger research project that focuses on the transformation of educational systems in six pilot schools involved in the Intr’Appreneur project. Due to COVID-19 restrictions, the project was downscaled to three schools, and virtual qualitative interviews were conducted with volunteer teachers and administrators. Data were collected from students, teachers, and principals regarding their perceptions of the new learning environment and their experiences. The analysis process involved developing categories, establishing codes for emerging themes, and validating the findings. The study emphasizes the importance of system leadership in achieving successful transformation. Results: The findings demonstrate that school principals played a vital role in enabling system-wide change by fostering a dynamic, collaborative, and inclusive culture, coordinating and mobilizing community members, and serving as educational role models who facilitated active and personalized pedagogy among the teaching staff. These qualities align with the characteristics of Leadership 4.0 and are crucial for successful school system transformations. Conclusion: This paper emphasizes the importance of systems leadership in driving educational transformations that extend beyond pedagogical and technological advancements. The research underscores the potential impact of such a leadership approach on teaching, learning, and leading processes in Education 4.0.

Keywords: leadership, system transformation, innovation, innovative learning environment, Education 4.0, system leadership

Procedia PDF Downloads 35
32 Validating the Cerebral Palsy Quality of Life for Children (CPQOL-Child) Questionnaire for Use in Sri Lanka

Authors: Shyamani Hettiarachchi, Gopi Kitnasamy

Abstract:

Background: The potentially high level of physical need and dependency experienced by children with cerebral palsy could affect the quality of life (QOL) of the child, the caregiver, and his/her family. Poor QOL in children with cerebral palsy is associated with the parent-child relationship, limited opportunities for social participation, limited access to healthcare services, psychological well-being, and the child's physical functioning. Given that children experiencing disabilities have little access to remedial support, with an inequitable service across districts in Sri Lanka, and given the impact of culture and societal stigma, there may be differing viewpoints across respondents. Objectives: The aim of this study was to evaluate the psychometric properties of the Tamil version of the Cerebral Palsy Quality of Life for Children (CPQOL-Child) Questionnaire. Design: An instrument development and validation study. Methods: Forward and backward translations of the CPQOL-Child were undertaken by a team comprising a physiotherapist, a speech and language therapist, and two linguists for the primary caregiver form and the child self-report form. As part of a pilot phase, the Tamil version of the CPQOL was completed by 45 primary caregivers of children with cerebral palsy and 15 children with cerebral palsy (GMFCS level 3-4). In addition, the primary caregivers commented on the process of filling in the questionnaire. The psychometric properties of test-retest reliability, internal consistency, and construct validity were assessed. Results: The test-retest reliability and internal consistency were high. A significant association (p < 0.001) was found between limited motor skills and poor QOL. Cronbach's alpha for the whole questionnaire was 0.95. Similarities and divergences were found between the two groups of respondents. The child respondents identified limited motor skills as associated with physical well-being and autonomy. Akin to this, the primary caregivers associated the severity of motor function with limitations of physical well-being and autonomy. The trend observed among child respondents was that QOL was related not to the level of impairment but to environmental factors. In addition, the primary caregivers' main concerns, about the child's future and the child's lack of independence, were not fully captured by the QOL questionnaire employed. Conclusions: Although the initial results show high test-retest reliability and internal consistency of the CPQOL questionnaire, it does not fully reflect the socio-cultural realities and primary concerns of the caregivers. The current findings highlight the need to take child and caregiver perceptions of QOL into account in clinical practice and research, and strongly indicate the need for culture-specific measures of QOL.
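
The internal-consistency figure quoted above (Cronbach's alpha of 0.95) follows the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch of the computation is given below; the response matrix is invented purely to show the arithmetic, not data from the study.

import numpy as np

def cronbach_alpha(items):
    # items: respondents x questionnaire-items matrix of scores
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

demo = [[3, 4, 3, 4], [5, 5, 4, 5], [2, 2, 3, 2], [4, 4, 4, 5], [3, 3, 2, 3]]  # hypothetical responses
print(round(cronbach_alpha(demo), 2))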

Keywords: cerebral palsy, CPQOL, culture, quality of life

Procedia PDF Downloads 326
31 Adding a Degree of Freedom to Opinion Dynamics Models

Authors: Dino Carpentras, Alejandro Dinkelberg, Michael Quayle

Abstract:

Within agent-based modeling, opinion dynamics is the field that focuses on modeling people's opinions. In this prolific field, most of the literature is dedicated to the exploration of two 'degrees of freedom' and how they impact the model’s properties (e.g., the average final opinion, the number of final clusters, etc.). These degrees of freedom are (1) the interaction rule, which determines how agents update their own opinion, and (2) the network topology, which defines the possible interactions among agents. In this work, we show that a third degree of freedom exists. It can be used to change a model's output by up to 100% of its initial value or to transform two models (both from the literature) into each other. Since opinion dynamics models are representations of the real world, it is fundamental to understand how people’s opinions can be measured. Even for abstract models (i.e., those not intended for fitting real-world data), it is important to understand whether the way of numerically representing opinions is unique and, if it is not, how the model dynamics would change under different representations. The process of measuring opinions is non-trivial, as it requires transforming a real-world opinion (e.g., supporting most liberal ideals) into a number. Such a process is usually not discussed in the opinion dynamics literature, but it has been intensively studied in a subfield of psychology called psychometrics. In psychometrics, opinion scales can be converted into each other, similarly to how meters can be converted to feet. Indeed, psychometrics routinely uses both linear and non-linear transformations of opinion scales. Here, we analyze how such transformations affect opinion dynamics models. We analyze this effect by using mathematical modeling and then validating our analysis with agent-based simulations. Firstly, we study the case of perfect scales. In this way, we show that scale transformations affect the model’s dynamics up to a qualitative level. This means that if two researchers use the same opinion dynamics model and even the same dataset, they could make totally different predictions just because they followed different renormalization processes. A similar situation appears if two different scales are used to measure opinions, even on the same population. This effect may be as strong as producing an uncertainty of 100% on the simulation’s output (i.e., all results are possible). Still, by using perfect scales, we show that scale transformations can be used to perfectly transform one model into another. We test this using two models from the standard literature. Finally, we test the effect of scale transformation in the case of finite precision using a 7-point Likert scale. In this way, we show how even a relatively small transformation of the scale introduces changes both at the qualitative level (i.e., in the most shared opinion at the end of the simulation) and in the number of opinion clusters. Thus, scale transformation appears to be a third degree of freedom of opinion dynamics models. This result deeply impacts both theoretical research on models' properties and the application of models to real-world data.
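
A minimal sketch of the effect described can be obtained with a simple bounded-confidence (Deffuant-style) update, which is not necessarily one of the two literature models the abstract tests: measure the same initial opinions on a nonlinearly transformed scale, run the same rule, and map the result back. The confidence bound, convergence parameter, and transform below are assumed illustrative choices.

import numpy as np

rng = np.random.default_rng(0)

def deffuant(opinions, eps=0.2, mu=0.5, steps=20000):
    # Plain bounded-confidence dynamics on opinions in [0, 1].
    x = opinions.copy()
    for _ in range(steps):
        i, j = rng.integers(0, x.size, size=2)
        if abs(x[i] - x[j]) < eps:
            x[i], x[j] = x[i] + mu * (x[j] - x[i]), x[j] + mu * (x[i] - x[j])
    return x

x0 = rng.random(200)                                     # opinions measured on one scale
clusters_raw = np.unique(np.round(deffuant(x0), 1)).size
# Re-measure the same opinions on a transformed scale (x -> x**3), apply the same
# rule, then map back: the final number of clusters generally differs.
clusters_transformed = np.unique(np.round(deffuant(x0 ** 3) ** (1 / 3), 1)).size
print(clusters_raw, clusters_transformed)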

Keywords: degrees of freedom, empirical validation, opinion scale, opinion dynamics

Procedia PDF Downloads 100
30 Validating the Micro-Dynamic Rule in Opinion Dynamics Models

Authors: Dino Carpentras, Paul Maher, Caoimhe O'Reilly, Michael Quayle

Abstract:

Opinion dynamics is dedicated to modeling the dynamic evolution of people's opinions. Models in this field are based on a micro-dynamic rule, which determines how people update their opinion when interacting. Despite the high number of new models (many of them based on new rules), little research has been dedicated to experimentally validating the rule. A few studies have started bridging this literature gap by experimentally testing the rule. However, in these studies, participants are forced to express their opinion as a number instead of using natural language. Furthermore, some of these studies average data from experimental questions without testing whether differences existed between them. Indeed, it is possible that different topics could show different dynamics. For example, people may be more prone to accepting someone else's opinion regarding less polarized topics. In this work, we collected data from 200 participants on 5 unpolarized topics. Participants expressed their opinions using natural language ('agree' or 'disagree') and the certainty of their answer, expressed as a number between 1 and 10. To keep the interaction based on natural language, certainty was not shown to other participants. We then showed the participant someone else's opinion on the same topic and, after a distraction task, repeated the measurement. To produce data compatible with standard opinion dynamics models, we multiplied the opinion (encoded as agree = 1 and disagree = -1) by the certainty to obtain a single 'continuous opinion' ranging from -10 to 10. By analyzing the topics independently, we observed that each one shows a different initial distribution. However, the dynamics (i.e., the properties of the opinion change) appear to be similar across all topics. This suggests that the same micro-dynamic rule could be applied to unpolarized topics. Another important result is that participants who change opinion tend to maintain similar levels of certainty. This is in contrast with typical micro-dynamic rules, where agents move to an average point instead of directly jumping to the opposite continuous opinion. As expected, we also observed the effect of social influence in the data. This means that exposing someone to 'agree' or 'disagree' influenced participants towards respectively higher or lower values of the continuous opinion. However, we also observed random variations whose effect was stronger than that of social influence. We even observed cases of people who changed from 'agree' to 'disagree,' even though they were exposed to 'agree.' This phenomenon is surprising, as, in the standard literature, the strength of the noise is usually smaller than the strength of social influence. Finally, we also built an opinion dynamics model from the data. The model was able to explain more than 80% of the data variance. Furthermore, by iterating the model, we were able to produce polarized states even when starting from an unpolarized population. This experimental approach offers a way to test the micro-dynamic rule. It also allows us to build models which are directly grounded in experimental results.
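
The encoding described above translates directly into code; the sketch below simply restates it, with the example response values being hypothetical rather than taken from the dataset.

# Continuous opinion = verbal response times certainty, as described in the abstract.
def continuous_opinion(response, certainty):
    # response: 'agree' or 'disagree'; certainty: integer between 1 and 10
    sign = 1 if response == "agree" else -1
    return sign * certainty

# Hypothetical before/after measurements for one participant exposed to 'agree'.
before = continuous_opinion("disagree", 4)   # -4
after = continuous_opinion("disagree", 2)    # -2: same verbal answer, lower certainty
print(before, after)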

Keywords: experimental validation, micro-dynamic rule, opinion dynamics, update rule

Procedia PDF Downloads 129
29 An Inquiry into the Usage of Complex Systems Models to Examine the Effects of the Agent Interaction in a Political Economic Environment

Authors: Ujjwall Sai Sunder Uppuluri

Abstract:

Group theory is a powerful tool that researchers can use to provide a structural foundation for their agent-based models. This paper argues that such agent-based models are the future of the social science disciplines. More specifically, researchers can use them to apply evolutionary theory to the study of complex social systems. This paper illustrates one example of how, theoretically, an agent-based model can be formulated from the application of group theory, systems dynamics, and evolutionary biology to analyze the strategies pursued by states to mitigate risk and maximize the usage of resources in order to achieve the objective of economic growth. This example can be applied to other social phenomena, and this is what makes group theory so useful to the analysis of complex systems: the theory provides the mathematical, formulaic proof for validating the complex system models that researchers build, as the paper will discuss. The aim of this research is also to provide researchers with a framework that can be used to model political entities such as states on a 3-dimensional plane, with the x-axis representing the resources (tangible and intangible) available to them, y the risks, and z the objective. There also exist other states with different constraints pursuing different strategies to climb the mountain. This mountain's environment is made up of the risks the state faces and its resource endowments. The mountain is also layered, in the sense that it has multiple peaks that must be overcome to reach the tallest peak. A state that sticks to a single strategy, or pursues a strategy that is not conducive to climbing the specific peak it has reached, is not able to continue advancing. To overcome the obstacle in its path, the state must innovate. Based on the definition of a group, we can categorize each state as its own group. Each state is a closed system, one which is made up of micro-level agents who have their own vectors and pursue strategies (actions) to achieve some sub-objectives. The state also has an identity, the inverse being anarchy and/or inaction. Finally, the agents making up a state interact with each other through competition and collaboration to mitigate risks and achieve sub-objectives that fall within the primary objective. Thus, researchers can categorize the state as an organism that reflects the sum of the output of the interactions pursued by agents at the micro level. When states compete, they each employ a strategy, and the state which has the better strategy (reflected by the strategies pursued by her parts) is able to out-compete her counterpart to acquire some resource, mitigate some risk, or fulfil some objective. This paper will attempt to illustrate how group theory combined with evolutionary theory and systems dynamics can allow researchers to model the long-run development, evolution, and growth of political entities through the use of a bottom-up approach.

Keywords: complex systems, evolutionary theory, group theory, international political economy

Procedia PDF Downloads 109
28 Ensemble Machine Learning Approach for Estimating Missing Data from CO₂ Time Series

Authors: Atbin Mahabbati, Jason Beringer, Matthias Leopold

Abstract:

To address the global challenges of climate and environmental change, there is a need for quantifying and reducing uncertainties in environmental data, including observations of carbon, water, and energy. Global eddy covariance flux tower networks (FLUXNET) and their regional counterparts (e.g., OzFlux, AmeriFlux, ChinaFLUX, etc.) were established in the late 1990s and early 2000s to address this demand. Despite the capability of eddy covariance in validating process modelling analyses, field surveys, and remote sensing assessments, there are some serious concerns regarding the challenges associated with the technique, e.g., data gaps and uncertainties. To address these concerns, this research developed an ensemble model to fill the data gaps in CO₂ flux, avoiding the limitations of using a single algorithm and therefore reducing the error and uncertainties associated with the gap-filling process. In this study, the data of five towers in the OzFlux network (Alice Springs Mulga, Calperum, Gingin, Howard Springs, and Tumbarumba) during 2013 were used to develop an ensemble machine learning model, using five feedforward neural networks (FFNNs) with different structures combined with an eXtreme Gradient Boosting (XGB) algorithm. The former, the FFNNs, provided the primary estimations in the first layer, while the latter, XGB, used the outputs of the first layer as its input to provide the final estimations of CO₂ flux. The introduced model showed slight superiority over each single FFNN and the XGB when each of these two methods was used individually, with overall RMSEs of 2.64, 2.91, and 3.54 g C m⁻² yr⁻¹, respectively (3.54 provided by the best FFNN). The most significant improvement was in the estimation of the extreme diurnal values (during midday and sunrise), as well as the nocturnal estimations, which are generally considered among the most challenging parts of CO₂ flux gap-filling. The towers, as well as the seasons, showed different levels of sensitivity to the improvements provided by the ensemble model. For instance, Tumbarumba showed more sensitivity than Calperum, where the differences between the ensemble model on the one hand and the FFNNs and XGB on the other were the smallest of all five sites. In addition, the performance difference between the ensemble model and its components used individually was more significant during the warm season (Jan, Feb, Mar, Oct, Nov, and Dec) than during the cold season (Apr, May, Jun, Jul, Aug, and Sep), due to the higher amount of photosynthesis by plants, which leads to a larger range of CO₂ exchange. In conclusion, the introduced ensemble model slightly improved the accuracy of CO₂ flux gap-filling and the robustness of the model. Therefore, ensemble machine learning models are potentially capable of improving data estimation and regression outcomes when there seems to be no more room for improvement using a single algorithm.
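
The two-layer architecture described (five differently structured FFNNs whose predictions feed a single XGB model) can be sketched as a simple stacking setup. The snippet below uses scikit-learn and the xgboost package on synthetic data; the hidden-layer sizes, hyperparameters, and placeholder drivers are assumptions and do not reproduce the OzFlux analysis.

import numpy as np
from sklearn.neural_network import MLPRegressor
from xgboost import XGBRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 6))                               # placeholder meteorological drivers
y = X @ rng.normal(size=6) + 0.3 * rng.normal(size=2000)     # synthetic CO2 flux target

train, test = slice(0, 1500), slice(1500, None)
shapes = [(8,), (16,), (32,), (16, 8), (32, 16)]              # five FFNNs with different structures
ffnns = [MLPRegressor(hidden_layer_sizes=s, max_iter=2000, random_state=0).fit(X[train], y[train])
         for s in shapes]

# First layer: FFNN predictions become the input features of the second-layer XGB model.
layer1_train = np.column_stack([m.predict(X[train]) for m in ffnns])
layer1_test = np.column_stack([m.predict(X[test]) for m in ffnns])
xgb = XGBRegressor(n_estimators=200, learning_rate=0.05).fit(layer1_train, y[train])
pred = xgb.predict(layer1_test)
print(float(np.sqrt(np.mean((pred - y[test]) ** 2))))         # hold-out RMSE of the ensemble

In a real gap-filling workflow the first-layer predictions would normally be generated out-of-fold to avoid feeding the booster overfitted values; that detail is omitted here for brevity.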

Keywords: carbon flux, Eddy covariance, extreme gradient boosting, gap-filling comparison, hybrid model, OzFlux network

Procedia PDF Downloads 109
27 The Design of a Computer Simulator to Emulate Pathology Laboratories: A Model for Optimising Clinical Workflows

Authors: M. Patterson, R. Bond, K. Cowan, M. Mulvenna, C. Reid, F. McMahon, P. McGowan, H. Cormican

Abstract:

This paper outlines the design of a simulator to allow for the optimisation of clinical workflows through a pathology laboratory and to improve the laboratory’s efficiency in the processing, testing, and analysis of specimens. Often pathologists have difficulty in pinpointing and anticipating issues in the clinical workflow until tests are running late or in error. It can be difficult to pinpoint the cause and even more difficult to predict any issues which may arise. For example, they often have no indication of how many samples are going to be delivered to the laboratory that day or at a given hour. If we could model scenarios using past information and known variables, it would be possible for pathology laboratories to initiate resource preparations, e.g. the printing of specimen labels, or to activate a sufficient number of technicians. This would expedite the clinical workload and clinical processes and improve the overall efficiency of the laboratory. The simulator design visualises the workflow of the laboratory, i.e. the clinical tests being ordered, the specimens arriving, current tests being performed, results being validated and reports being issued. The simulator depicts the movement of specimens through this process, as well as the number of specimens at each stage. This movement is visualised using an animated flow diagram that is updated in real time. A traffic light colour-coding system will be used to indicate the level of flow through each stage (green for normal flow, orange for slow flow, and red for critical flow). This would allow pathologists to clearly see where there are issues and bottlenecks in the process. Graphs would also be used to indicate the status of specimens at each stage of the process. For example, a graph could show the percentage of specimen tests that are on time, potentially late, running late and in error. Clicking on potentially late samples will display more detailed information about those samples, the tests that still need to be performed on them and their urgency level. This would allow any issues to be resolved quickly. In the case of potentially late samples, this could help to ensure that critically needed results are delivered on time. The simulator will be created as a single-page web application. Various web technologies will be used to create the flow diagram showing the workflow of the laboratory. JavaScript will be used to program the logic, animate the movement of samples through each of the stages and to generate the status graphs in real time. This live information will be extracted from an Oracle database. As well as being used in a real laboratory situation, the simulator could also be used for training purposes. 'Bots' would be used to control the flow of specimens through each step of the process. Like existing software agent technology, these bots would be configurable in order to simulate different situations which may arise in a laboratory, such as an emerging epidemic. The bots could then be turned on and off to allow trainees to complete the tasks required at that step of the process, for example validating test results.
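
The traffic-light logic is essentially a thresholding of the load at each workflow stage. Since the tool itself is planned as a JavaScript single-page application, the snippet below is only a language-agnostic illustration in Python, and the thresholds and stage figures are assumed values.

# Map the load at a workflow stage to the traffic-light colour described above.
def stage_colour(specimens_waiting, normal_capacity):
    load = specimens_waiting / normal_capacity
    if load <= 1.0:
        return "green"    # normal flow
    if load <= 1.5:
        return "orange"   # slow flow
    return "red"          # critical flow

for stage, waiting, capacity in [("labelling", 40, 50), ("testing", 70, 50), ("validation", 90, 50)]:
    print(stage, stage_colour(waiting, capacity))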

Keywords: laboratory-process, optimization, pathology, computer simulation, workflow

Procedia PDF Downloads 249
26 Production, Characterization and In vitro Evaluation of [223Ra]RaCl2 Nanomicelles for Targeted Alpha Therapy of Osteosarcoma

Authors: Yang Yang, Luciana Magalhães Rebelo Alencar, Martha Sahylí Ortega Pijeira, Beatriz da Silva Batista, Alefe Roger Silva França, Erick Rafael Dias Rates, Ruana Cardoso Lima, Sara Gemini-Piperni, Ralph Santos-Oliveira

Abstract:

Radium-223 dichloride ([²²³Ra]RaCl₂) is an alpha-particle-emitting radiopharmaceutical currently approved for the treatment of patients with castration-resistant prostate cancer, symptomatic bone metastases, and no known visceral metastatic disease. [²²³Ra]RaCl₂ is a bone-seeking calcium mimetic that binds into the newly formed bone stroma, especially osteoblastic or sclerotic metastases, killing the tumor cells by inducing DNA breaks in a potent and localized manner. Nonetheless, the successful therapy of osteosarcoma as a primary bone tumor is still a challenge. Nanomicelles are colloidal nanosystems widely used in drug development to improve blood circulation time, bioavailability, and specificity of therapeutic agents, among other applications. In addition, the enhanced permeability and retention effect of nanosystems, and the renal excretion of nanomicelles reported in most cases so far, are very attractive for achieving selective and increased accumulation at the tumor site as well as for increasing the safety of [²²³Ra]RaCl₂ in the clinical routine. In the present work, [²²³Ra]RaCl₂ nanomicelles were produced, characterized, evaluated in vitro, and compared with a pure [²²³Ra]RaCl₂ solution using SAOS2 osteosarcoma cells. The [²²³Ra]RaCl₂ nanomicelles were prepared using the amphiphilic copolymer Pluronic F127. Dynamic light scattering analysis of the freshly produced [²²³Ra]RaCl₂ nanomicelles demonstrated a mean size of 129.4 nm with a polydispersity index (PDI) of 0.303. After one week stored in the refrigerator, the mean size of the [²²³Ra]RaCl₂ nanomicelles increased to 169.4 nm with a PDI of 0.381. Atomic force microscopy analysis of the [²²³Ra]RaCl₂ nanomicelles exhibited spherical structures whose heights reach 1 µm, suggesting the filling of the Pluronic F127 nanomicelles with [²²³Ra]RaCl₂. The viability assay with [²²³Ra]RaCl₂ nanomicelles displayed a dose-dependent response, as was observed with pure [²²³Ra]RaCl₂. However, at the same dose, the [²²³Ra]RaCl₂ nanomicelles were 20% more efficient in killing SAOS2 cells when compared with pure [²²³Ra]RaCl₂. These findings demonstrate the effectiveness of the nanosystem, validating the application of nanotechnology in targeted alpha therapy with [²²³Ra]RaCl₂. In addition, the [²²³Ra]RaCl₂ nanomicelles may be decorated and incorporated with a great variety of agents and compounds (e.g., monoclonal antibodies, aptamers, peptides) to overcome the limited use of [²²³Ra]RaCl₂.

Keywords: nanomicelles, osteosarcoma, radium dichloride, targeted alpha therapy

Procedia PDF Downloads 94
25 Predicting Susceptibility to Coronary Artery Disease using Single Nucleotide Polymorphisms with a Large-Scale Data Extraction from PubMed and Validation in an Asian Population Subset

Authors: K. H. Reeta, Bhavana Prasher, Mitali Mukerji, Dhwani Dholakia, Sangeeta Khanna, Archana Vats, Shivam Pandey, Sandeep Seth, Subir Kumar Maulik

Abstract:

Introduction: Research has demonstrated a connection between coronary artery disease (CAD) and genetics. We performed deep literature mining, using both bioinformatic and manual efforts, to identify susceptibility polymorphisms in coronary artery disease. Further, the study sought to validate these findings in an Asian population. Methodology: In the first phase, we used an automated pipeline which organizes and presents structured information on SNPs, populations, and diseases. The information was obtained by applying Natural Language Processing (NLP) techniques to approximately 28 million PubMed abstracts. To accomplish this, we utilized Python scripts to extract and curate disease-related data, filter out false positives, and categorize them into 24 hierarchical groups using Named Entity Recognition (NER) algorithms. From the extensive research conducted, a total of 466 unique PubMed Identifiers (PMIDs) and 694 Single Nucleotide Polymorphisms (SNPs) related to coronary artery disease (CAD) were identified. To refine the selection process, a thorough manual examination of all the studies was carried out. Specifically, SNPs that demonstrated susceptibility to CAD and exhibited a positive Odds Ratio (OR) were selected, and a final pool of 324 SNPs was compiled. The next phase involved validating the identified SNPs in DNA samples of 96 CAD patients and 37 healthy controls from the Indian population using a Global Screening Array. Results: Out of the 324 SNPs, only 108 were present in the array data; of these, 4 SNPs showed a significant difference in minor allele frequency between cases and controls. These were rs187238 of the IL-18 gene, rs731236 of the VDR gene, rs11556218 of the IL16 gene, and rs5882 of the CETP gene. Prior research has reported associations of these SNPs with various pathways, such as endothelial damage, susceptibility conferred by vitamin D receptor (VDR) polymorphisms, and reduction of HDL-cholesterol levels, ultimately leading to the development of CAD. Among these, only rs731236 had previously been studied in the Indian population, and only in the context of diabetes and vitamin D deficiency. For the first time, these SNPs were reported to be associated with CAD in an Indian population. Conclusion: This pool of 324 SNPs is a unique resource that can help to uncover risk associations in CAD. Here, we validated it in an Indian population. Further validation in different populations may offer valuable insights and contribute to the development of a screening tool, and may help enable the implementation of primary prevention strategies targeted at the vulnerable population.
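
The SNP-extraction step of such a pipeline can be approximated, at its simplest, by matching dbSNP rsID patterns in abstract text; the NER-based disease categorisation is not reproduced here. The abstract string below is invented for illustration only.

import re

RSID = re.compile(r"\brs\d{3,}\b")   # dbSNP identifier pattern

abstract = ("We genotyped rs731236 (VDR) and rs5882 (CETP) in 500 cases of coronary "
            "artery disease and found rs731236 associated with disease risk.")
snps = sorted(set(RSID.findall(abstract)))
print(snps)   # ['rs5882', 'rs731236']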

Keywords: coronary artery disease, single nucleotide polymorphism, susceptible SNP, bioinformatics

Procedia PDF Downloads 45
24 Validating Chronic Kidney Disease-Specific Risk Factors for Cardiovascular Events Using National Data: A Retrospective Cohort Study of the Nationwide Inpatient Sample

Authors: Fidelis E. Uwumiro, Chimaobi O. Nwevo, Favour O. Osemwota, Victory O. Okpujie, Emeka S. Obi, Omamuyovbi F. Nwoagbe, Ejiroghene Tejere, Joycelyn Adjei-Mensah, Christopher N. Ekeh, Charles T. Ogbodo

Abstract:

Several risk factors associated with cardiovascular events have been identified as specific to chronic kidney disease (CKD). This study endeavors to validate these CKD-specific risk factors using up-to-date national-level data, thereby highlighting the crucial significance of confirming the validity and generalizability of findings obtained from previous studies conducted on smaller patient populations. The study utilized the Nationwide Inpatient Sample database to identify adult hospitalizations for CKD from 2016 to 2020, employing validated ICD-10-CM/PCS codes. A comprehensive literature review was conducted to identify both traditional and CKD-specific risk factors associated with cardiovascular events. Risk factors and cardiovascular events were defined using a combination of ICD-10-CM/PCS codes and statistical commands. Only risk factors with specific ICD-10 codes and hospitalizations with complete data were included in the study. Cardiovascular events of interest included cardiac arrhythmias, sudden cardiac death, acute heart failure, and acute coronary syndromes. Univariate and multivariate regression models were employed to evaluate the association between chronic kidney disease-specific risk factors and cardiovascular events while adjusting for the impact of traditional CV risk factors such as old age, hypertension, diabetes, hypercholesterolemia, inactivity, and smoking. A total of 690,375 hospitalizations for CKD were included in the analysis. The study population was predominantly male (375,564, 54.4%) and primarily received care at urban teaching hospitals (512,258, 74.2%). The mean age of the study population was 61 years (SD 0.1), and 86.7% (598,555) had a Charlson Comorbidity Index (CCI) of 3 or more. At least one traditional risk factor for CV events was present in 84.1% of all hospitalizations (580,605), while 65.4% (451,505) included at least one CKD-specific risk factor for CV events. The incidence of CV events in the study was as follows: acute coronary syndromes (41,422; 6%), sudden cardiac death (13,807; 2%), heart failure (404,560; 58.6%), and cardiac arrhythmias (124,267; 18%). 91.7% (113,912) of all cardiac arrhythmias were atrial fibrillation. Significant odds of cardiovascular events on multivariate analyses included: malnutrition (aOR: 1.09; 95% CI: 1.06–1.13; p<0.001), post-dialytic hypotension (aOR: 1.34; 95% CI: 1.26–1.42; p<0.001), thrombophilia (aOR: 1.46; 95% CI: 1.29–1.65; p<0.001), sleep disorder (aOR: 1.17; 95% CI: 1.09–1.25; p<0.001), and post-renal transplant immunosuppressive therapy (aOR: 1.39; 95% CI: 1.26–1.53; p<0.001). The study validated malnutrition, post-dialytic hypotension, thrombophilia, sleep disorders, and post-renal transplant immunosuppressive therapy, highlighting their association with increased risk for cardiovascular events in CKD patients. No significant association was observed between uremic syndrome, hyperhomocysteinemia, hyperuricemia, hypertriglyceridemia, leptin levels, carnitine deficiency, or anemia and the odds of experiencing cardiovascular events.
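
The adjusted odds ratios quoted above come from multivariable logistic regression. A minimal sketch of how an aOR and its 95% CI are obtained is shown below using statsmodels on synthetic data; the variables, effect sizes, and sample size are invented and do not reflect the Nationwide Inpatient Sample analysis.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 5000
malnutrition = rng.integers(0, 2, n)           # CKD-specific factor of interest
age_over_65 = rng.integers(0, 2, n)            # traditional risk factors to adjust for
diabetes = rng.integers(0, 2, n)
logit = -2.0 + 0.3 * malnutrition + 0.5 * age_over_65 + 0.4 * diabetes
event = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # cardiovascular event indicator

X = sm.add_constant(np.column_stack([malnutrition, age_over_65, diabetes]))
fit = sm.Logit(event, X).fit(disp=0)
aor = np.exp(fit.params[1])                     # adjusted OR for malnutrition
ci_low, ci_high = np.exp(fit.conf_int()[1])     # 95% CI on the odds-ratio scale
print(round(aor, 2), round(ci_low, 2), round(ci_high, 2))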

Keywords: cardiovascular events, cardiovascular risk factors in CKD, chronic kidney disease, nationwide inpatient sample

Procedia PDF Downloads 42
23 Acute Antihyperglycemic Activity of a Selected Medicinal Plant Extract Mixture in Streptozotocin-Induced Diabetic Rats

Authors: D. S. N. K. Liyanagamage, V. Karunaratne, A. P. Attanayake, S. Jayasinghe

Abstract:

Diabetes mellitus is an ever-increasing global health problem which causes disability and untimely death. Current treatments using synthetic drugs have caused numerous adverse effects as well as complications, leading research efforts in search of safe and effective alternative treatments for diabetes mellitus. Even though there are effective traditional Ayurvedic remedies, due to a lack of scientific exploration, they have not been proven to be beneficial for common use. Hence, the aim of this study is to evaluate a traditional remedy made of a mixture of plant components, namely leaves of Murraya koenigii L. Spreng (Rutaceae), cloves of Allium sativum L. (Amaryllidaceae), fruits of Garcinia quaesita Pierre (Clusiaceae), and seeds of Piper nigrum L. (Piperaceae), used for the treatment of diabetes. We report herein the preliminary results of an in vivo study of the anti-hyperglycaemic activity of extracts of the above plant mixture in Wistar rats. A mixture made of equal weights (100 g) of the above-mentioned medicinal plant parts was extracted into cold water, hot water (3 h reflux), and a water:acetone mixture (1:1) separately. Male Wistar rats were divided into six groups that received different treatments. Diabetes mellitus was induced by intraperitoneal administration of streptozotocin at a dose of 70 mg/kg in the rats of groups two, three, four, five, and six. Group one (N=6) served as the healthy untreated control and group two (N=6) as the diabetic untreated control; both groups received distilled water. The cold water, hot water, and water:acetone plant extracts were orally administered to the diabetic rats in groups three, four, and five, respectively, at doses of 0.5 g/kg (n=6), 1.0 g/kg (n=6), and 1.5 g/kg (n=6) in each group. Glibenclamide (0.5 mg/kg) was administered to the diabetic rats in group six (N=6), which served as the positive control. The acute anti-hyperglycemic effect was evaluated over a four-hour period using the total area under the curve (TAUC) method. The results of the test groups of rats were compared with the diabetic untreated control. The TAUC of healthy and diabetic rats were 23.16±2.5 mmol/L.h and 58.31±3.0 mmol/L.h, respectively. A significant dose-dependent improvement in acute anti-hyperglycaemic activity was observed for the water:acetone extract (25%), hot water extract (20%), and cold water extract (15%) compared to the diabetic untreated control rats in terms of glucose tolerance (P < 0.05). Therefore, the results suggest that the plant mixture has a potent antihyperglycemic effect, thus validating its use in Ayurvedic medicine for the management of diabetes mellitus. Future studies will focus on the determination of the long-term in vivo anti-diabetic mechanisms and the isolation of bioactive compounds responsible for the anti-diabetic activity.
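
The TAUC comparison mentioned above is a trapezoidal integration of the blood-glucose curve over the four-hour window. The sketch below shows the arithmetic with invented glucose readings; the values and time points are not the study's data.

import numpy as np

t_hours = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0])                   # assumed sampling times
glucose_untreated = np.array([14.0, 16.5, 17.0, 15.5, 14.5, 14.0])   # mmol/L, hypothetical
glucose_treated = np.array([13.5, 15.0, 14.5, 12.5, 11.5, 11.0])     # mmol/L, hypothetical

tauc_untreated = np.trapz(glucose_untreated, t_hours)                # mmol/L.h
tauc_treated = np.trapz(glucose_treated, t_hours)
improvement = 100 * (tauc_untreated - tauc_treated) / tauc_untreated
print(round(tauc_untreated, 1), round(tauc_treated, 1), f"{improvement:.0f}% lower TAUC")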

Keywords: acute antihyperglycemic activity, herbal mixture, oral glucose tolerance test, Sri Lankan medicinal plant extracts

Procedia PDF Downloads 152
22 Reliability Analysis of Geometric Performance of Onboard Satellite Sensors: A Study on Location Accuracy

Authors: Ch. Sridevi, A. Chalapathi Rao, P. Srinivasulu

Abstract:

The location accuracy of data products is a critical parameter in assessing the geometric performance of satellite sensors. This study focuses on the reliability analysis of onboard sensors to evaluate their performance in terms of location accuracy over time. The analysis utilizes field failure data and employs the Weibull distribution to determine the reliability and, in turn, to understand improvements or degradations over a period of time. The analysis begins by scrutinizing the location accuracy error, which is the root mean square (RMS) error of the differences between ground control point coordinates observed on the product and on the map, and identifying the failure data with reference to time. A significant challenge in this study is to thoroughly analyze the possibility of an infant mortality phase in the data. To address this, the Weibull distribution is utilized to determine whether the data exhibit an infant stage or have transitioned into the operational phase; the shape parameter beta plays a crucial role in identifying this stage. Additionally, determining the exact start of the operational phase and the end of the infant stage poses another challenge, as it is crucial to eliminate residual infant mortality or wear-out from the model, since either can significantly increase the total failure rate. To address this, the well-established statistical Laplace test is applied to infer the behavior of the sensors and to accurately ascertain the duration of the different phases of the lifetime and the time required for stabilization. This approach also helps in understanding whether the bathtub curve model, which accounts for the different phases in the lifetime of a product, is appropriate for the data and whether the thresholds for the infant period and wear-out phase are accurately estimated, by validating the data in individual phases with Weibull distribution curve-fitting analysis. Once the operational phase is determined, reliability is assessed using Weibull analysis. This analysis not only provides insights into the reliability of individual sensors with regard to location accuracy over the required period of time, but also establishes a model that can be applied to automate similar analyses for various sensors and parameters using field failure data. Furthermore, the identification of the best-performing sensor through this analysis serves as a benchmark for future missions and designs, ensuring continuous improvement in sensor performance and reliability. Overall, this study provides a methodology to accurately determine the duration of the different phases in the life data of individual sensors. It enables an assessment of the time required for stabilization and provides insights into the reliability during the operational phase and the commencement of the wear-out phase. By employing this methodology, designers can make informed decisions regarding sensor performance with regard to location accuracy, contributing to enhanced accuracy in satellite-based applications.
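
As a rough illustration of the two ingredients named above, the sketch below applies the Laplace trend test to a set of failure times and then fits a two-parameter Weibull distribution with SciPy. It is an assumed workflow, not the authors' pipeline, and the failure times and observation window are hypothetical.

```python
import numpy as np
from scipy import stats

failure_times = np.sort(np.array([120.0, 340.0, 610.0, 880.0, 1150.0, 1500.0]))  # e.g. days
T = 1600.0  # end of the observation window

# Laplace test statistic: strongly negative values suggest a decreasing failure
# rate (infant mortality); values near zero suggest a roughly constant rate
# (operational phase); strongly positive values suggest wear-out.
n = len(failure_times)
u = (failure_times.mean() - T / 2.0) / (T * np.sqrt(1.0 / (12.0 * n)))
print(f"Laplace statistic u = {u:.2f}")

# Two-parameter Weibull fit on the (assumed) operational-phase data; the shape
# parameter beta < 1 indicates infant mortality, beta ~ 1 a constant failure
# rate, and beta > 1 wear-out.
beta, loc, eta = stats.weibull_min.fit(failure_times, floc=0)
print(f"shape beta = {beta:.2f}, scale eta = {eta:.1f}")
```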

Keywords: bathtub curve, geometric performance, Laplace test, location accuracy, reliability analysis, Weibull analysis

Procedia PDF Downloads 43
21 A Web and Cloud-Based Measurement System Analysis Tool for the Automotive Industry

Authors: C. A. Barros, Ana P. Barroso

Abstract:

Any industrial company needs to determine the amount of variation that exists within its measurement process and guarantee the reliability of its data by studying the performance of its measurement system in terms of linearity, bias, repeatability, reproducibility, and stability. This issue is critical for automotive industry suppliers, who are required to be certified to the IATF 16949:2016 standard (which replaces ISO/TS 16949) of the International Automotive Task Force, defining the requirements of a quality management system for companies in the automotive industry. Measurement System Analysis (MSA) is one of the mandatory tools. Frequently, the measurement system in companies is not connected to the equipment and does not incorporate the methods proposed by the Automotive Industry Action Group (AIAG). To address these constraints, an R&D project is in progress whose objective is to develop a web and cloud-based MSA tool. This MSA tool incorporates Industry 4.0 concepts, such as Internet of Things (IoT) protocols to ensure connection with the measuring equipment, cloud computing, artificial intelligence, statistical tools, and advanced mathematical algorithms. This paper presents the preliminary findings of the project. The web and cloud-based MSA tool is innovative because it implements all statistical tests proposed in the MSA-4 reference manual from AIAG as well as other emerging methods and techniques. As it is integrated with the measuring devices, it reduces the manual input of data and therefore the errors. The tool ensures traceability of all performed tests and can be used in quality laboratories and on production lines. Besides, it monitors MSAs over time, allowing both the analysis of deviations in the variation of the measurements performed and the management of measurement equipment and calibrations. To develop the MSA tool, a ten-step approach was implemented. First, a benchmarking analysis of the current competitors and commercial solutions linked to MSA was performed, with respect to the Industry 4.0 paradigm. Next, the size of the target market for the MSA tool was analyzed. Afterwards, data flow and traceability requirements were analyzed in order to implement an IoT data network that interconnects with the equipment, preferably wirelessly. The MSA web solution was designed under UI/UX principles, and an API in Python was developed to run the algorithms and the statistical analysis. Continuous validation of the tool by companies is being performed to ensure real-time management of 'big data'. The main results of this R&D project are: the MSA tool, web and cloud-based; the Python API; new algorithms for the market; and the UI/UX style guide of the tool. The proposed MSA tool adds value to the state of the art as it ensures an effective response to the new challenges of measurement systems, which are increasingly critical in production processes. Although the automotive industry triggered the development of this innovative MSA tool, other industries would also benefit from it. Currently, companies from the molds and plastics, chemical, and food industries are already validating it.
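
Two of the simplest MSA statistics in the AIAG family of tests are bias against a reference value and repeatability (equipment variation). The sketch below is an assumed illustration of those two checks in Python, not the project's API; the measurement values and reference are invented.

```python
import numpy as np

reference_value = 10.000          # certified value of a reference part
measurements = np.array([10.02, 9.98, 10.01, 10.03, 9.99, 10.00, 10.02, 9.97])

# Bias: difference between the mean observed value and the reference value.
bias = measurements.mean() - reference_value

# Repeatability (equipment variation, EV): conventionally reported as 6 sigma
# of repeated measurements by one appraiser on one part.
repeatability_sigma = measurements.std(ddof=1)
ev = 6.0 * repeatability_sigma

print(f"bias = {bias:+.4f}")
print(f"repeatability (EV, 6 sigma) = {ev:.4f}")
```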

Keywords: automotive industry, Industry 4.0, Internet of Things, IATF 16949:2016, measurement system analysis

Procedia PDF Downloads 180
20 Impact Location from Instrumented Mouthguard Kinematic Data in Rugby

Authors: Jazim Sohail, Filipe Teixeira-Dias

Abstract:

Mild traumatic brain injury (mTBI) within non-helmeted contact sports is a growing concern due to the serious risk of potential injury. Extensive research is being conducted into head kinematics in non-helmeted contact sports utilizing instrumented mouthguards, which allow researchers to record accelerations and velocities of the head during and after an impact. This does not, however, allow the location of the impact on the head, or its magnitude and orientation, to be determined. This research proposes and validates two methods to quantify impact locations from instrumented mouthguard kinematic data, one using rigid body dynamics, the other utilizing machine learning. The rigid body dynamics technique focuses on establishing and matching moments from Euler's equations and torque equations in order to find the impact location on the head. The methodology is validated with impact data collected from a lab test with a dummy head fitted with an instrumented mouthguard. Additionally, a Hybrid III dummy head finite element model was utilized to create synthetic kinematic data sets for impacts from varying locations to validate the impact location algorithm. The algorithm calculates accurate impact locations; however, it requires preprocessing of live data, which is currently being done by cross-referencing data timestamps to video footage. The machine learning technique focuses on eliminating the preprocessing aspect by establishing trends within time-series signals from instrumented mouthguards to determine the impact location on the head. An unsupervised learning technique is used to cluster together impacts within similar regions from an entire time-series signal. The kinematic signals established from mouthguards are converted to the frequency domain before using a clustering algorithm to cluster together similar signals within a time series that may span the length of a game. Impacts are clustered within predetermined location bins. The same Hybrid III dummy finite element model is used to create impacts that closely replicate on-field impacts in order to create synthetic time-series datasets consisting of impacts in varying locations. These time-series data sets are used to validate the machine learning technique. The rigid body dynamics technique provides a good method to establish the accurate impact location of impact signals that have already been labeled as true impacts and filtered out of the entire time series. However, the machine learning technique provides a method that can be implemented with long time-series signal data but will only provide the impact location within predetermined regions on the head. Additionally, the machine learning technique can be used to eliminate false impacts captured by sensors, saving additional time for data scientists using instrumented mouthguard kinematic data, as validating true impacts with video footage would not be required.
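
In the spirit of the unsupervised approach described above (frequency-domain conversion followed by clustering into location bins), the sketch below clusters windowed kinematic signals by their magnitude spectra. It is an assumed illustration, not the authors' model; the signals, window length, sampling rate, and cluster count are all invented.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
fs = 1000                                # sampling rate (Hz) of one kinematic channel
window = 100                             # samples per candidate impact window
signals = rng.normal(size=(40, window))  # 40 hypothetical impact windows

# Frequency-domain features: magnitude spectrum of each window.
features = np.abs(np.fft.rfft(signals, axis=1))

# Cluster impacts into a predetermined number of location bins.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(features)
print(kmeans.labels_)   # cluster (location-bin) assignment per windowed impact
```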

Keywords: head impacts, impact location, instrumented mouthguard, machine learning, mTBI

Procedia PDF Downloads 173
19 Validation of Mapping Historical Linked Data to International Committee for Documentation (CIDOC) Conceptual Reference Model Using Shapes Constraint Language

Authors: Ghazal Faraj, András Micsik

Abstract:

Shapes Constraint Language (SHACL), a World Wide Web Consortium (W3C) language, provides well-defined shapes expressed as RDF graphs, named "shape graphs". These shape graphs validate other resource description framework (RDF) graphs, which are called "data graphs". The structural features of SHACL permit generating a variety of conditions to evaluate string-matching patterns, value types, and other constraints. Moreover, the framework of SHACL supports high-level validation by expressing more complex conditions in languages such as the SPARQL Protocol and RDF Query Language (SPARQL). SHACL consists of two parts: SHACL Core and SHACL-SPARQL. SHACL Core includes all shapes that cover the most frequent constraint components, while SHACL-SPARQL is an extension that allows SHACL to express more complex, customized constraints. Validating the efficacy of dataset mapping is an essential component of data reconciliation mechanisms, as enhancing the linking of different datasets is an ongoing process. The conventional validation methods are the semantic reasoner and SPARQL queries. The former checks formalization errors and data type inconsistency, while the latter validates data contradictions. After executing SPARQL queries, the retrieved information needs to be checked manually by an expert. However, this methodology is time-consuming and inaccurate, as it does not test the mapping model comprehensively. Therefore, there is a serious need for a new methodology that covers all validation aspects for linking and mapping diverse datasets. Our goal is to develop a new approach that achieves optimal validation outcomes. The first step towards this goal is implementing SHACL to validate the mapping between the International Committee for Documentation (CIDOC) conceptual reference model (CRM) and one of its ontologies. To initiate this project successfully, a thorough understanding of both the source and target ontologies was required. Subsequently, the proper environment to run SHACL and its shape graphs was determined. As a case study, we applied SHACL to a CIDOC-CRM dataset after running a Pellet reasoner via the Protégé program. The applied validation falls under multiple categories: a) data type validation, which checks whether the source data are mapped to the correct data type, for instance, whether a birthdate is assigned to xsd:dateTime and linked to a Person entity via the crm:P82a_begin_of_the_begin property; b) data integrity validation, which detects inconsistent data, for instance, whether a person's birthdate occurred before any of the linked event creation dates. The expected results of our work are: 1) highlighting validation techniques and categories, and 2) selecting the most suitable techniques for the various categories of validation tasks. The next step is to establish a comprehensive validation model and generate SHACL shapes automatically.
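
A minimal sketch of the data type validation category described above is shown below, run with the pyshacl library: a shape requires that values of crm:P82a_begin_of_the_begin are typed as xsd:dateTime, and a data graph deliberately violates it. The shape, namespaces, and example resources are assumptions for illustration, not the authors' actual shapes.

```python
from pyshacl import validate

shapes_ttl = """
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix crm: <http://www.cidoc-crm.org/cidoc-crm/> .
@prefix ex:  <http://example.org/> .

ex:BirthDateShape a sh:NodeShape ;
    sh:targetSubjectsOf crm:P82a_begin_of_the_begin ;
    sh:property [
        sh:path crm:P82a_begin_of_the_begin ;
        sh:datatype xsd:dateTime ;
    ] .
"""

data_ttl = """
@prefix crm: <http://www.cidoc-crm.org/cidoc-crm/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:  <http://example.org/> .

# Wrong datatype on purpose: xsd:date instead of xsd:dateTime.
ex:birthOfPerson1 crm:P82a_begin_of_the_begin "1879-03-14"^^xsd:date .
"""

conforms, _, report_text = validate(
    data_graph=data_ttl, shacl_graph=shapes_ttl,
    data_graph_format="turtle", shacl_graph_format="turtle",
)
print(conforms)      # False: the constraint is violated
print(report_text)   # human-readable validation report
```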

Keywords: SHACL, CIDOC-CRM, SPARQL, validation of ontology mapping

Procedia PDF Downloads 227
18 Nutrition Budgets in Uganda: Research to Inform Implementation

Authors: Alexis D'Agostino, Amanda Pomeroy

Abstract:

Background: Resource availability is essential to effective implementation of national nutrition policies. To this end, the SPRING Project has collected and analyzed budget data from government ministries in Uganda, international donors, and other nutrition implementers to provide data, for the first time, on what funding is actually allocated to implement the nutrition activities named in the national nutrition plan. Methodology: USAID's SPRING Project used the Uganda Nutrition Action Plan (UNAP) as the starting point for budget analysis. Thorough desk reviews of public budgets from government, donors, and NGOs were mapped to activities named in the UNAP and validated by key informants (KIs) across the stakeholder groups. By relying on nationally recognized and locally created documents, SPRING provided a familiar basis for discussions, increasing the credibility and local ownership of the findings. Among other things, the KIs validated the amount, source, and type (specific or sensitive) of funding. When only high-level budget data were available, KIs provided rough estimates of the percentage of allocations that were actually nutrition-relevant, allowing the creation of confidence intervals around some funding estimates. Results: After validating data and narrowing in on estimates of funding to nutrition-relevant programming, researchers applied a formula to estimate overall nutrition allocations. In line with guidance by the SUN Movement and its three-step process, nutrition-specific funding was counted at 100% of its allocation amount, while nutrition-sensitive funding was counted at 25%. The vast majority of nutrition funding in Uganda is off-budget, with over 90 percent of all nutrition funding provided outside of the government system. Overall allocations are split nearly evenly between nutrition-specific and -sensitive activities. In FY 2013/14, the two-year study's baseline year, on- and off-budget funding for nutrition was estimated to be around 60 million USD. While the 60 million USD in allocations compares favorably to the 66 million USD estimate of the cost of the UNAP, not all activities are sufficiently funded; those with a focus on behavior change were the most underfunded. In addition, accompanying qualitative research suggested that donor funding for nutrition activities may shift government funding into other areas of work, making it difficult to estimate the sustainability of current nutrition investments. Conclusions: Beyond providing figures, these estimates can be used together with the qualitative results of the study to explain how and why these amounts were allocated for particular activities and not others, to examine the negotiation process that occurred, and to suggest options for improving the flow of finances to UNAP activities for the remainder of the policy tenure. By the end of the PBN study, several years of nutrition budget estimates will be available to compare changes in funding over time. Halfway through SPRING's work, there is evidence that country stakeholders have begun to feel ownership over the findings, and some ministries are requesting increased technical assistance in nutrition budgeting. Ultimately, these data can be used within organizations to advocate for more and improved nutrition funding and to improve the targeting of nutrition allocations.
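
The weighting formula cited above (nutrition-specific allocations counted at 100%, nutrition-sensitive at 25%, per the SUN Movement guidance) reduces to a simple weighted sum. The sketch below illustrates the arithmetic; the line items and amounts are invented, not the study's data.

```python
# Illustrative allocations: (activity, amount in USD, SUN classification)
allocations_usd = [
    ("micronutrient supplementation", 4_000_000, "specific"),
    ("agricultural diversification", 20_000_000, "sensitive"),
    ("school feeding",                8_000_000, "sensitive"),
]

WEIGHTS = {"specific": 1.00, "sensitive": 0.25}  # SUN three-step weighting

total = sum(amount * WEIGHTS[kind] for _, amount, kind in allocations_usd)
print(f"weighted nutrition allocation: {total:,.0f} USD")
```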

Keywords: budget, nutrition, financing, scale-up

Procedia PDF Downloads 408
17 Sand Production Modelled with Darcy Fluid Flow Using Discrete Element Method

Authors: M. N. Nwodo, Y. P. Cheng, N. H. Minh

Abstract:

In the process of recovering oil in weak sandstone formations, the strength of the sandstone around the wellbore is weakened due to the increase of effective stress/load from the completion activities around the cavity. The weakened and de-bonded sandstone may be eroded away by the produced fluid, which is termed sand production. It is one of the major trending subjects in the petroleum industry because of its significant negative impacts, as well as some observed positive impacts. For efficient sand management, therefore, there has been a need for a reliable study tool to understand the mechanism of sanding. One method of studying sand production is the use of the widely recognized Discrete Element Method (DEM) code Particle Flow Code (PFC3D), which represents sands as granular individual elements bonded together at contact points. However, there is limited knowledge of the particle-scale behavior of weak sandstone and of the parameters that affect sanding. This paper aims to investigate the reliability of using PFC3D and a simple Darcy flow in understanding the sand production behavior of a weak sandstone. An isotropic triaxial test on a weak oil sandstone sample was first simulated at a confining stress of 1 MPa to calibrate and validate the parallel bond model of PFC3D, using a 10 m height and 10 m diameter solid cylindrical model. The effect of the confining stress on the number of bond failures was studied using this cylindrical model. With the calibrated data and sample material properties obtained from the triaxial test, simulations without and with fluid flow were carried out to check the effect of Darcy flow on bond failures using the same model geometry. The fluid flow network comprised every four particles connected with tetrahedral flow pipes with a central pore or flow domain. Parametric studies included the effects of confining stress and fluid pressure, as well as validating the flow rate-permeability relationship to verify Darcy's law. The effect of model size scaling on sanding was also investigated using a 4 m height, 2 m diameter model. The parallel bond model successfully calibrated the sample's strength of 4.4 MPa, showing a sharp peak strength before strain-softening, similar to the behavior of real cemented sandstones. There seems to be an exponentially increasing relationship for the bigger model, but a curvilinear shape for the smaller model. The presence of the Darcy flow induced tensile forces and increased the number of broken bonds. In the parametric studies, flow rate has a linear relationship with permeability at a constant pressure head. The higher the fluid flow pressure, the higher the number of broken bonds/sanding. The DEM code PFC3D is a promising tool for studying the micromechanical behavior of cemented sandstones.
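
The linear flow rate-permeability relationship verified in the parametric study follows directly from Darcy's law, Q = k A ΔP / (μ L). The sketch below illustrates that check; it is an assumed standalone calculation, not the PFC3D scripts, and all property values are illustrative.

```python
import numpy as np

mu = 1.0e-3              # fluid viscosity, Pa.s (water-like)
area = np.pi * 1.0**2    # cross-sectional area of a 2 m diameter model, m^2
length = 4.0             # flow length of the smaller model, m
dp = 1.0e5               # constant pressure difference (constant head), Pa

permeability = np.array([1e-13, 2e-13, 5e-13, 1e-12])   # m^2
flow_rate = permeability * area * dp / (mu * length)     # Darcy's law: Q = k A dP / (mu L)

# At a constant pressure head, Q scales linearly with k.
for k, q in zip(permeability, flow_rate):
    print(f"k = {k:.0e} m^2  ->  Q = {q:.3e} m^3/s")
```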

Keywords: discrete element method, fluid flow, parametric study, sand production/bonds failure

Procedia PDF Downloads 293
16 Validation of Asymptotic Techniques to Predict Bistatic Radar Cross Section

Authors: M. Pienaar, J. W. Odendaal, J. C. Smit, J. Joubert

Abstract:

Simulations are commonly used to predict the bistatic radar cross section (RCS) of military targets since characterization measurements can be expensive and time-consuming. It is thus important to accurately predict the bistatic RCS of targets. Computational electromagnetic (CEM) methods can be used for bistatic RCS prediction. CEM methods are divided into full-wave and asymptotic methods. Full-wave methods are numerical approximations to the exact solution of Maxwell's equations. These methods are very accurate but are computationally very intensive and time-consuming. Asymptotic techniques make simplifying assumptions in solving Maxwell's equations and are thus less accurate, but require less computational resources and time. Asymptotic techniques can thus be very valuable for the prediction of the bistatic RCS of electrically large targets, due to the decreased computational requirements. This study extends previous work by validating the accuracy of asymptotic techniques to predict bistatic RCS through comparison with full-wave simulations as well as measurements. Validation is done with canonical structures as well as complex realistic aircraft models instead of only looking at a complex slicy structure. The slicy structure is a combination of canonical structures, including cylinders, corner reflectors, and cubes. Validation is done over large bistatic angles and at different polarizations. Bistatic RCS measurements were conducted in a compact range at the University of Pretoria, South Africa. The measurements were performed at different polarizations from 2 GHz to 6 GHz. Fixed bistatic angles of β = 30.8°, 45°, and 90° were used. The measurements were calibrated with an active calibration target. The EM simulation tool FEKO was used to generate simulated results. The full-wave multi-level fast multipole method (MLFMM) simulated results, together with the measured data, were used as the reference for validation. The accuracy of physical optics (PO) and geometrical optics (GO) was investigated. Differences relating to amplitude, lobing structure, and null positions were observed between the asymptotic, full-wave, and measured data. PO and GO were more accurate at angles close to the specular scattering directions, and the accuracy seemed to decrease as the bistatic angle increased. At large bistatic angles PO did not perform well due to the shadow regions not being treated appropriately. PO also did not perform well for canonical structures where multi-bounce was the main scattering mechanism. PO and GO do not account for diffraction, but these inaccuracies tended to decrease as the electrical size of the objects increased. It was evident that both asymptotic techniques do not properly account for bistatic structural shadowing. Specular scattering was calculated accurately even if targets did not meet the electrically large criteria. It was evident that the bistatic RCS prediction performance of PO and GO depends on incident angle, frequency, target shape, and observation angle. The improved computational efficiency of the asymptotic solvers yields a major advantage over full-wave solvers and measurements; however, there is still much room for improvement in the accuracy of these asymptotic techniques.
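
One simple way to quantify the amplitude differences described above is the RMS deviation of the asymptotic prediction from the full-wave or measured reference over the observation angles, with the RCS expressed in dBsm. The sketch below is an assumed post-processing step, not part of the FEKO workflow, and the RCS values are invented.

```python
import numpy as np

angles_deg = np.linspace(0, 90, 10)
rcs_reference_dbsm = np.array([12.0, 10.5, 8.0, 5.5, 3.0, 1.0, -1.5, -4.0, -7.0, -10.0])
rcs_po_dbsm        = np.array([12.2, 10.1, 7.5, 5.8, 2.1, 0.2, -3.0, -6.5, -10.0, -14.0])

# RMS difference in dB between the PO prediction and the reference data.
diff_db = rcs_po_dbsm - rcs_reference_dbsm
rms_error_db = np.sqrt(np.mean(diff_db**2))
print(f"RMS deviation of PO from reference: {rms_error_db:.2f} dB")
```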

Keywords: asymptotic techniques, bistatic RCS, geometrical optics, physical optics

Procedia PDF Downloads 226
15 Monitoring the Production of Large Composite Structures Using Dielectric Tool Embedded Capacitors

Authors: Galatee Levadoux, Trevor Benson, Chris Worrall

Abstract:

With the rise of public awareness of climate change comes an increasing demand for renewable sources of energy. As a result, the wind power sector is striving to manufacture longer, more efficient, and more reliable wind turbine blades. Currently, one of the leading causes of blade failure in service is improper cure of the resin during manufacture. The infusion process creating the main part of the composite blade structure remains a critical step that is yet to be monitored in real time. This stage consists of a viscous resin being drawn into a mould under vacuum, then undergoing a curing reaction until solidification. Successful infusion assumes the resin fills all the voids and cures completely. Given that the electrical properties of the resin change significantly during its solidification, both the filling of the mould and the curing reaction can be followed using dielectrometry. However, industrially available dielectric sensors are currently too small to monitor the entire surface of a wind turbine blade. The aim of the present research project is to scale up the dielectric sensor technology and develop a device able to monitor the manufacturing process of large composite structures, assessing the conformity of the blade before it even comes out of the mould. An array of flat copper wires acting as electrodes is embedded in a polymer matrix fixed in an infusion mould. A multi-frequency analysis from 1 Hz to 10 kHz is performed during the filling of the mould with an epoxy resin and the hardening of the said resin. By following the variations of the complex admittance Y*, the filling of the mould and the curing process are monitored. Results are compared to numerical simulations of the sensor in order to validate a virtual cure-monitoring system. The results obtained by drawing glycerol on top of the copper sensor displayed a linear relation between the wetted length of the sensor and the measured complex admittance. Drawing epoxy resin on top of the sensor and letting it cure at room temperature for 24 hours provided characteristic curves similar to those obtained when conventional interdigitated sensors are used to follow the same reaction. The response from the developed sensor showed the different stages of the polymerization of the resin, validating the geometry of the prototype. The model created and analysed using COMSOL showed that the dielectric cure process can be simulated, so long as sufficiently accurate time- and temperature-dependent material properties can be determined. The model can be used to help design larger sensors suitable for use with full-sized blades. The preliminary results obtained with the sensor prototype indicate that the infusion and curing process of an epoxy resin can be followed with the chosen configuration on a scale of several decimeters. Further work is to be devoted to studying the influence of the sensor geometry and the infusion parameters on the results obtained. Ultimately, the aim is to develop a larger-scale sensor able to monitor the flow and cure of large composite panels industrially.
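
As a rough illustration of the quantity being tracked, the sketch below evaluates the complex admittance of a simple parallel conductance-capacitance model of the resin, Y* = G + jωC, over the 1 Hz to 10 kHz sweep mentioned above. The model and the G and C values are assumptions for illustration, not the sensor's measured response or the COMSOL model.

```python
import numpy as np

freq_hz = np.logspace(0, 4, 5)        # 1 Hz to 10 kHz
omega = 2.0 * np.pi * freq_hz

# Resin conductance G typically drops by orders of magnitude as the epoxy cures;
# two illustrative states are compared here: (G in S, C in F).
states = {"uncured": (1e-6, 50e-12), "cured": (1e-9, 20e-12)}

for name, (G, C) in states.items():
    Y = G + 1j * omega * C            # complex admittance Y* = G + j*omega*C
    print(name, np.abs(Y))            # |Y*| across the frequency sweep
```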

Keywords: composite manufacture, dielectrometry, epoxy, resin infusion, wind turbine blades

Procedia PDF Downloads 133
14 Hygrothermal Interactions and Energy Consumption in Cold Climate Hospitals: Integrating Numerical Analysis and Case Studies to Investigate and Analyze the Impact of Air Leakage and Vapor Retarding

Authors: Amir E. Amirzadeh, Richard K. Strand

Abstract:

Moisture-induced problems are a significant concern for building owners, architects, construction managers, and building engineers, as they can have substantial impacts on building enclosures' durability and performance. Computational analyses, such as hygrothermal and thermal analysis, can provide valuable information and demonstrate the expected relative performance of building enclosure systems but are not grounded in absolute certainty. This paper evaluates the hygrothermal performance of common enclosure systems in hospitals in cold climates. The study aims to investigate the impact of exterior wall systems on hospitals, focusing on factors such as durability, construction deficiencies, and energy performance. The study primarily examines the impact of air leakage and vapor retarding layers relative to energy consumption. While these factors have been studied in residential and commercial buildings, there is a lack of information on their impact on hospitals in a holistic context. The study integrates various research studies and professional experience in hospital building design to achieve its objective. The methodology involves surveying and observing exterior wall assemblies, reviewing common exterior wall assemblies and details used in hospital construction, performing simulations and numerical analyses of various variables, validating the model and mechanism using available data from industry and academia, visualizing the outcomes of the analysis, and developing a mechanism to demonstrate the relative performance of exterior wall systems for hospitals under specific conditions. The data sources include case studies from real-world projects and peer-reviewed articles, industry standards, and practices. This research intends to integrate and analyze the in-situ and as-designed performance and durability of building enclosure assemblies with numerical analysis. The study's primary objective is to provide a clear and precise roadmap to better visualize and comprehend the correlation between the durability and performance of common exterior wall systems used in the construction of hospitals and the energy consumption of these buildings under certain static and dynamic conditions. As the construction of new hospitals and renovation of existing ones have grown over the last few years, it is crucial to understand the effect of poor detailing or construction deficiencies on building enclosure systems' performance and durability in healthcare buildings. This study aims to assist stakeholders involved in hospital design, construction, and maintenance in selecting durable and high-performing wall systems. It highlights the importance of early design evaluation, regular quality control during the construction of hospitals, and understanding the potential impacts of improper and inconsistent maintenance and operation practices on occupants, owner, building enclosure systems, and Heating, Ventilation, and Air Conditioning (HVAC) systems, even if they are designed to meet the project requirements.

Keywords: hygrothermal analysis, building enclosure, hospitals, energy efficiency, optimization and visualization, uncertainty and decision making

Procedia PDF Downloads 40
13 Creation of a Test Machine for the Scientific Investigation of Chain Shot

Authors: Mark McGuire, Eric Shannon, John Parmigiani

Abstract:

Timber harvesting increasingly involves mechanized equipment. This has increased the efficiency of harvesting but has also introduced worker-safety concerns. One such concern arises from the use of harvesters. During operation, harvesters subject saw chain to large dynamic mechanical stresses. These stresses can, under certain conditions, cause the saw chain to fracture. The high speed of harvester saw chain can cause the resulting open chain loop to fracture a second time due to the dynamic loads placed upon it as it travels through space. If a second fracture occurs, it can result in a projectile consisting of one to several chain links. This projectile is referred to as a chain shot. It has speeds similar to a bullet but typically has greater mass and is a significant safety concern. Numerous examples exist of chain shots penetrating bullet-proof barriers and causing severe injury and death. Improved harvester-cab barriers can help prevent injury; however, a comprehensive scientific understanding of chain shot is required to consistently reduce or prevent it. Obtaining this understanding requires a test machine with the capability to cause chain shot to occur under carefully controlled conditions and to accurately measure the response. Worldwide, few such test machines exist. Those that do focus on validating the ability of barriers to withstand a chain shot impact rather than obtaining a scientific understanding of the chain shot event itself. The purpose of this paper is to describe the design, fabrication, and use of a test machine capable of a comprehensive scientific investigation of chain shot. The capabilities of this machine are to test all commercially available saw chains and bars at chain tensions and speeds meeting and exceeding those typically encountered in harvester use, and to accurately measure the corresponding key technical parameters. The test machine was constructed inside a standard shipping container. This provides space for both an operator station and a test chamber. In order to contain the chain shot under any possible test conditions, the test chamber was lined with a base layer of AR500 steel followed by an overlay of HDPE. To accommodate varying bar orientations and fracture-initiation sites, the entire saw chain drive unit and bar mounting system is modular and capable of being located anywhere in the test chamber. The drive unit consists of a high-speed electric motor with a flywheel. Standard Ponsse harvester head components are used for bar mounting and chain tensioning. Chain lubrication is provided by a separate peristaltic pump. Chain fracture is initiated through ISO standard 11837. Measured parameters include shaft speed, motor vibration, bearing temperatures, motor temperature, motor current draw, hydraulic fluid pressure, chain force at fracture, and high-speed camera images. Results show that the machine is capable of consistently causing chain shot. Measurement output shows the fracture location and the force associated with fracture as a function of saw chain speed and tension. Use of this machine will result in a scientific understanding of chain shot and consequently improved products and greater harvester operator safety.

Keywords: chain shot, safety, testing, timber harvesters

Procedia PDF Downloads 124
12 Long-Term Exposure Assessments for Cooking Workers Exposed to Polycyclic Aromatic Hydrocarbons and Aldehydes Containing in Cooking Fumes

Authors: Chun-Yu Chen, Kua-Rong Wu, Yu-Cheng Chen, Perng-Jy Tsai

Abstract:

Cooking fumes are known to contain polycyclic aromatic hydrocarbons (PAHs) and aldehydes, some of which have been proven carcinogenic or possibly carcinogenic to humans. Considering their chronic health effects, long-term exposure data are required for assessing cooking workers' lifetime health risks. Previous exposure assessment studies, due to both time and cost constraints, were mostly based on cross-sectional data. Therefore, establishing long-term exposure data has become an important issue for conducting health risk assessments for cooking workers. An approach was proposed in this study. Here, the generation rates of both PAHs and aldehydes from a cooking process were determined by placing a sampling train exactly under the exhaust fan under both the total enclosure condition and the normal operating condition. Subtracting the concentration collected under the latter (representing the hood-collected concentration) from that of the former (representing the total emitted concentration), the fugitive emitted concentration was determined. The above data were further converted to generation rates based on the flow rates specified for the exhaust fan. The determinations of the above generation rates were conducted in a testing chamber with a selected cooking process (deep-frying chicken nuggets in 3 L of peanut oil at 200°C). The sampling train installed under the exhaust fan consisted of an IOM inhalable sampler with a glass fiber filter for collecting particle-phase PAHs, followed by an XAD-2 tube for gas-phase PAHs. The same train was also used to sample aldehydes, however installed with a filter pre-coated with DNPH, followed by a 2,4-DNPH cartridge, for collecting particle-phase and gas-phase aldehydes, respectively. PAH and aldehyde samples were analyzed by GC/MS-MS (Agilent 7890B) and HPLC-UV (HITACHI L-7100), respectively. The obtained generation rates of both PAHs and aldehydes were applied to the near-field/far-field exposure model to estimate the exposures of cooks (the estimated near-field concentration) and helpers (the estimated far-field concentration). For validation purposes, both PAH and aldehyde samplings were conducted simultaneously using the same sampling train at both the near-field and far-field sites of the testing chamber. The sampling results, together with the use of a mixed-effect model, were used to calibrate the estimated near-field/far-field exposures. In the present study, the obtained emission rates were further converted to emission factors of both PAHs and aldehydes according to the amount of cooking oil consumed. Applying long-term cooking oil consumption records, the long-term emissions of both PAHs and aldehydes were determined, and the long-term exposure databanks for cooks (the estimated near-field concentration) and helpers (the estimated far-field concentration) were then established. Results show that the proposed approach was adequate for determining the generation rates of both PAHs and aldehydes under various fan exhaust flow rate conditions. The estimated near-field/far-field exposures, though significantly different from those obtained in the field, could be calibrated using the mixed-effect model. Finally, the established long-term databank could provide a useful basis for conducting long-term exposure assessments for cooking workers exposed to PAHs and aldehydes.
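
The near-field/far-field approach referred to above is the standard two-zone exposure model, in which the far-field steady-state concentration is G/Q and the near-field concentration adds a term G/β, where G is the fugitive generation rate, Q the general ventilation rate, and β the air exchange rate between the near field and the far field. The sketch below illustrates that arithmetic with invented values; it is not the authors' calibrated model.

```python
G = 0.05      # fugitive generation rate of a PAH, mg/min (illustrative)
Q = 20.0      # general ventilation rate of the kitchen, m^3/min (illustrative)
beta = 5.0    # near-field/far-field air exchange rate, m^3/min (illustrative)

c_far = G / Q              # helper (far-field) steady-state concentration, mg/m^3
c_near = G / Q + G / beta  # cook (near-field) steady-state concentration, mg/m^3

print(f"far-field:  {c_far:.4f} mg/m^3")
print(f"near-field: {c_near:.4f} mg/m^3")
```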

Keywords: aldehydes, cooking oil fumes, long-term exposure assessment, modeling, polycyclic aromatic hydrocarbons (PAHs)

Procedia PDF Downloads 113