Search results for: analytical validation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3562

472 Designing Offshore Pipelines Facing the Geohazard of Active Seismic Faults

Authors: Maria Trimintziou, Michael Sakellariou, Prodromos Psarropoulos

Abstract:

Nowadays, the exploitation of hydrocarbon reserves in deep seas and oceans, combined with the need to transport hydrocarbons among countries, has made the design, construction, and operation of offshore pipelines very significant. It is therefore evident that many more offshore pipelines will be constructed in the near future. Since offshore pipelines usually cross extended areas, they may face a variety of geohazards that impose substantial permanent ground deformations (PGDs) on the pipeline and potentially threaten its integrity. When a route encounters a geohazard area, there are three options. The first is to avoid the problematic area through rerouting, which is usually regarded as an unfavorable solution due to its high cost. The second is to apply (if possible) mitigation/protection measures in order to eliminate the geohazard itself. The third, and often most appealing, option is to allow the pipeline to cross the geohazard area, provided that the pipeline has been verified against the expected PGDs. In areas of moderate or high seismicity, the design of an offshore pipeline is more demanding due to earthquake-related geohazards, such as landslides, soil liquefaction phenomena, and active faults. It is worth mentioning that although there is extensive worldwide experience in offshore geotechnics and pipeline design, experience in the seismic design of offshore pipelines is rather limited, since most pipelines have been constructed in non-seismic regions (e.g., the North Sea, Western Australia, and the Gulf of Mexico). The current study focuses on the seismic design of offshore pipelines against active faults. After an extensive review of the provisions of seismic norms worldwide and of the available analytical methods, the study numerically simulates (through finite-element modeling and strain-based criteria) the distress of offshore pipelines subjected to PGDs induced by active seismic faults at the seabed. Factors such as the geometrical properties of the fault, the mechanical properties of the ruptured soil formations, and the pipeline characteristics are examined. After drawing conclusions regarding the seismic vulnerability of offshore pipelines, potentially cost-effective mitigation measures are proposed, taking constructability issues into account.
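
Before resorting to finite-element analysis, a closed-form screening estimate of the fault-induced strain demand is often useful. The sketch below, a minimal illustration rather than the study's FE model, implements a Newmark-Hall-type average-strain approximation for a pipeline crossing a strike-slip fault; the anchor length and all numerical values are illustrative assumptions.

```python
import math

def newmark_hall_strain(delta_f, beta_deg, anchor_len):
    """Average axial strain in a buried pipeline crossing a strike-slip
    fault, following the classical Newmark-Hall-type approximation.

    delta_f    -- fault offset at the crossing (m)
    beta_deg   -- angle between fault trace and pipeline axis (degrees)
    anchor_len -- effective anchored length on each side of the fault (m)
    """
    beta = math.radians(beta_deg)
    axial = delta_f * math.cos(beta) / (2.0 * anchor_len)                  # direct stretch
    lateral = 0.5 * (delta_f * math.sin(beta) / (2.0 * anchor_len)) ** 2  # geometric elongation
    return axial + lateral

# Illustrative values only: 1.5 m offset, 40 deg crossing angle, 300 m anchor length
strain = newmark_hall_strain(1.5, 40.0, 300.0)
print(f"average tensile strain demand: {strain:.4%}")  # compared against a strain-based limit
```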

Keywords: offshore pipelines, seismic design, active faults, permanent ground deformations (PGDs)

Procedia PDF Downloads 567
471 Multiscale Modeling of Damage in Textile Composites

Authors: Jaan-Willem Simon, Bertram Stier, Brett Bednarcyk, Evan Pineda, Stefanie Reese

Abstract:

Textile composites, in which the reinforcing fibers are woven or braided, have become very popular in numerous applications in the aerospace, automotive, and maritime industries. These textile composites are advantageous due to their ease of manufacture, damage tolerance, and relatively low cost. However, physics-based modeling of the mechanical behavior of textile composites is challenging. Compared to their unidirectional counterparts, textile composites introduce additional geometric complexities, which cause significant local stress and strain concentrations. Since these internal concentrations are primary drivers of nonlinearity, damage, and failure within textile composites, they must be taken into account for the models to be predictive. The macro-scale approach to modeling textile-reinforced composites treats the whole composite as an effective, homogenized material. This approach is very computationally efficient, but it cannot be considered predictive beyond the elastic regime because the complex microstructural geometry is not considered. Further, this approach can, at best, offer a phenomenological treatment of nonlinear deformation and failure. In contrast, the meso-scale approach to modeling textile composites explicitly considers the internal geometry of the reinforcing tows; thus, their interaction and the effects of their curved paths can be modeled. The tows are treated as effective (homogenized) materials, requiring the use of anisotropic material models to capture their behavior. Finally, the micro-scale approach goes one level lower, modeling the individual filaments that constitute the tows. This paper will compare meso- and micro-scale approaches to modeling the deformation, damage, and failure of textile-reinforced polymer matrix composites. For the meso-scale approach, the woven composite architecture will be modeled using the finite element method, and an anisotropic damage model for the tows will be employed to capture the local nonlinear behavior. For the micro-scale, two different models will be used, one based on the finite element method and the other on an embedded semi-analytical approach. The goal will be the comparison and evaluation of these approaches to modeling textile-reinforced composites in terms of accuracy, efficiency, and utility.
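
Central to the meso-scale approach is the homogenization of each tow into an effective anisotropic material. As a minimal sketch of what such homogenization looks like in its simplest form, the snippet below computes rule-of-mixtures bounds for a tow's longitudinal and transverse moduli; the fiber/matrix values are illustrative, and the paper's actual material models are far richer.

```python
def tow_effective_moduli(E_f, E_m, V_f):
    """Effective elastic moduli of a homogenized tow (transversely isotropic),
    from the elementary rule of mixtures.

    E_f -- fiber (filament) Young's modulus [GPa]
    E_m -- matrix Young's modulus [GPa]
    V_f -- fiber volume fraction within the tow
    """
    E1 = V_f * E_f + (1.0 - V_f) * E_m          # Voigt bound: longitudinal modulus
    E2 = 1.0 / (V_f / E_f + (1.0 - V_f) / E_m)  # Reuss bound: transverse modulus
    return E1, E2

# Illustrative carbon/epoxy values
E1, E2 = tow_effective_moduli(E_f=230.0, E_m=3.5, V_f=0.65)
print(f"E1 = {E1:.1f} GPa, E2 = {E2:.1f} GPa")
```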

Keywords: multiscale modeling, continuum damage model, damage interaction, textile composites

Procedia PDF Downloads 333
470 Comparison of GIS-Based Soil Erosion Susceptibility Models Using Support Vector Machine, Binary Logistic Regression and Artificial Neural Network in the Southwest Amazon Region

Authors: Elaine Lima Da Fonseca, Eliomar Pereira Da Silva Filho

Abstract:

The modeling of areas susceptible to soil loss by hydro-erosive processes is a simplified representation of reality intended to predict future behavior from the observation and interaction of a set of geoenvironmental factors. The models of areas with potential for soil loss will be obtained through binary logistic regression, artificial neural networks, and support vector machines. The municipality of Colorado do Oeste, in the south of the western Amazon, was chosen because of soil degradation caused by anthropogenic activities, such as agriculture, road construction, overgrazing, and deforestation, as well as its environmental and socioeconomic configuration. Initially, a soil erosion inventory map will be constructed from field investigations, including the use of remotely piloted aircraft, orbital imagery, and the PLANAFLORO/RO database. 100 sampling units with the presence of erosion will be selected based on the assumptions indicated in the literature, and, to complement the dichotomous analysis, 100 units with no erosion will be randomly designated. The next step will be the selection of the predictive parameters that exert, jointly, directly, or indirectly, some influence on the mechanism of occurrence of soil erosion events. The chosen predictors are altitude, slope, aspect (orientation of the slope), slope curvature, compound topographic index, stream power index, lineament density, normalized difference vegetation index, drainage density, lithology, soil type, erosivity, and ground surface temperature. After evaluating the relative contribution of each predictor variable, the erosion susceptibility model will be applied to the municipality of Colorado do Oeste, Rondônia, using the SPSS Statistics 26 software. The model will be evaluated through the Cox & Snell R² and Nagelkerke R², the Hosmer-Lemeshow test, the log-likelihood value, and the Wald test, in addition to analysis of the confusion matrix, ROC curve, and cumulative gain according to the model specification. The validation of the synthesis map resulting from the models of potential soil erosion risk will be carried out by means of Kappa indices, accuracy, and sensitivity, as well as by field verification of the erosion susceptibility classes using drone photogrammetry. The expected outcome is a map of the following erosion susceptibility classes: very low, low, moderate, high, and very high, which may constitute a screening tool to identify areas where more detailed investigations need to be carried out, allowing social resources to be applied more efficiently.
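
For the logistic-regression branch of such a workflow, a compact sketch of model fitting and of the evaluation statistics mentioned above (ROC/AUC, Kappa, confusion matrix) might look as follows; the data here are random placeholders standing in for the 200 sampling units and the 13 predictors.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, cohen_kappa_score, confusion_matrix

rng = np.random.default_rng(0)

# Placeholder design matrix: 200 sampling units (100 erosion / 100 no-erosion),
# 13 geoenvironmental predictors (altitude, slope, NDVI, ...), standardized.
X = rng.normal(size=(200, 13))
y = np.repeat([1, 0], 100)                      # 1 = erosion present, 0 = absent

model = LogisticRegression(max_iter=1000).fit(X, y)
p = model.predict_proba(X)[:, 1]                # susceptibility scores in [0, 1]

print("AUC  :", roc_auc_score(y, p))
print("kappa:", cohen_kappa_score(y, p > 0.5))
print(confusion_matrix(y, p > 0.5))

# Susceptibility classes (very low ... very high) by quantile binning of p
classes = np.digitize(p, np.quantile(p, [0.2, 0.4, 0.6, 0.8]))
print(np.bincount(classes))                     # units per susceptibility class
```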

Keywords: modeling, susceptibility to erosion, artificial intelligence, Amazon

Procedia PDF Downloads 49
469 The Development of Documentary Filmmaking in Early Independent India

Authors: Camille Deprez

Abstract:

This paper presents findings of an ongoing Hong Kong government-funded project on 'The Documentary Film in India (1948-1975)' (GRF 1240314), for which extensive fieldwork has been carried out in various archives in India. The project investigates the role and significance of the Indian documentary film sector from the inauguration of the state-sponsored Films Division, one year after independence, in 1948 until the declaration of a 'State of Emergency' in 1975. The documentary film production of this first period of national independence was characterised by increasing formal experimentation, analytical social and political enquiry, and a complex, mixed structure of state-sponsored monopoly and free-market operation. However, that production remains significantly under-researched. What were the main production, distribution and exhibition strategies over this period? What were the recurrent themes and stylistic features of the films produced? In the new context of national independence (in which the State considered film a means of mass persuasion), consolidation of commercial film, and the emergence of television and art cinema, what role did official, professional and creative factors play in the development of the documentary film sector? What was the impact of such films, and what challenges did the documentary film face in India? Based on a cross-analysis of primary written research documents, interviews and relevant films, this study interweaves empirical study of the sector's financing, production, distribution and exhibition strategies, as well as the films' content and form, with the larger historical context of India from 1948 to 1975. Whilst most of the films made within the sector explored social issues, they were rarely able to do so from an overtly critical perspective. This paper analyses the contribution of important filmmakers and producers, including Ezra Mir, Paul Zils, Jean Bhownagary, S. Sukhdev, S. N. S. Sastri, and P. Pati, to the development of the Indian documentary film sector and style within and outside the remits of Films Division. It more specifically assesses the extent to which they criticised the State, showed the inequalities in Indian society and explored film form.

Keywords: documentary film, film archives, film history, India

Procedia PDF Downloads 278
468 The Assessment of the Diabetes Mellitus Complications on Oral Health: A Longitudinal Study

Authors: Mimoza Canga, Irene Malagnino, Gresa Baboci, Edit Xhajanka, Vito Antonio Malagnino

Abstract:

Background: Diabetes mellitus is a problematic chronic disease that affects a considerable number of people around the world and is directly associated with patients' oral health. Objective: The objective of this study is to analyze and evaluate the impact of diabetes mellitus on oral health. Materials and methods: The present research considered 300 patients with an age range of 11 to 80 years. The study sample was composed of 191 males (63.7%) and 109 females (36.3%). We divided them into seven age groups: 11-20, 21-30, 31-40, 41-50, 51-60, 61-70, and 71-80 years. This descriptive and analytical research was designed as a longitudinal study. Statistical analysis was performed using IBM SPSS Statistics 23.0. Results: 20.7% of the patients belonged to the age range from 41 to 50 years, while 27% were from 51 to 60 years. The results showed that 24.4% of the participants had high blood sugar values of 250-300 mg/dl, whereas 19% had very high blood sugar values of 300-350 mg/dl. It was also observed that 83.7% of patients were affected by gingivitis. A significant finding is that 22% of patients had more than 7 teeth with dental caries and 21% had 5-7 teeth with dental caries, whereas 29% had 4-5 carious teeth and the remaining 28% had 1-3. Most patients (27%) had lost more than 7 teeth and 22% had lost 5-7 teeth, whereas 31% had lost 4-5 teeth and only 20% had lost 1-3 teeth. This study showed that high blood sugar values had a direct impact on the manifestation of gingivitis, with a strong correlation between them (P = .001). A strong correlation was also found between dental caries and high blood sugar values (P < .001). Males with diabetes mellitus were more affected by dental caries (P = .02) than females (P = .03). High blood sugar values also affected tooth loss, and the correlation between them was statistically significant (P < .001). Conclusion: The results of this study suggest that diabetes mellitus is a possible risk factor for oral health: among Albanian patients over 51 years old, 43% had more than 5 teeth with dental caries, 49% had more than 5 missing teeth, and the majority (83.7%) suffered from gingivitis. This study asserts that patients who do not have periodic check-ups for diabetes mellitus are at significant risk of oral diseases.
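
The blood sugar-gingivitis association reported here is the kind of relationship typically tested with a chi-square test of independence on a contingency table. A minimal sketch follows; the counts are hypothetical placeholders, not the study's data.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table (counts are illustrative, not study data):
# rows = blood sugar (>= 250 mg/dl, < 250 mg/dl), columns = gingivitis (yes, no)
table = [[140, 20],
         [111, 29]]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}, dof = {dof}")
```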

Keywords: dental caries, diabetes mellitus, gingivitis, missing teeth

Procedia PDF Downloads 189
467 Saving the Decolonized Subject from Neglected Tropical Diseases: Public Health Campaign and Household-Centred Sanitation in Colonial West Africa, 1900-1960

Authors: Adebisi David Alade

Abstract:

In pre-colonial West Africa, the deadliness of the climate vis-à-vis malaria and other tropical diseases to Europeans turned the region into the "white man's grave." Thus, immediately after the partition of Africa in 1885, the mission civilisatrice and mise en valeur not only became a pretext for the establishment of colonial rule; from a medical point of view, the control and possible eradication of disease in the continent emerged as one of the first concerns of the European colonizers. Though geared toward making Africa exploitable, some colonial Water, Sanitation and Hygiene (WASH) policies and projects, historical evidence suggests, reduced certain tropical diseases in some West African communities. Exploring some of these disease control interventions by way of historical revisionism, this paper challenges the orthodox interpretation of colonial sanitation and public health measures in West Africa. It critiques the deployment of race and class as analytical tools for the study of colonial WASH projects, an exercise which often reduces the complexity and ambiguity of colonialism to the binary of colonizer and colonized. Since West Africa presently ranks high among regions with Neglected Tropical Diseases (NTDs), it is imperative to decentre colonial racism and economic exploitation in African history in order to give room for Africans to see themselves in other ways. Far from resolving the problem of NTDs by fiat in the region, this study seeks to highlight important blind spots in African colonial history in an attempt to prevent post-colonial African leaders from throwing the baby out with the bathwater. As scholars researching colonial sanitation and public health in the continent rarely examine its complex meaning and content, this paper submits that the outright demonization of colonial rule across space and time continues to build an ideological wall between present and past, which not only inhibits fruitful borrowing from the colonial administration of West Africa but also prevents a wider understanding of the challenges of WASH policies and projects in most West African states.

Keywords: colonial rule, disease control, neglected tropical diseases, WASH

Procedia PDF Downloads 168
466 Repeatable Surface Enhanced Raman Spectroscopy Substrates from SERSitive for Wide Range of Chemical and Biological Substances

Authors: Monika Ksiezopolska-Gocalska, Pawel Albrycht, Robert Holyst

Abstract:

Surface Enhanced Raman Spectroscopy (SERS) is a technique used to analyze very low concentrations of substances in solutions, even aqueous solutions, which is its advantage over IR spectroscopy. The technique can be used in pharmacy (to check the purity of products), in forensics (to determine whether illegal substances were present at a crime scene), or in medicine (serving as a medical test), among many other fields. Due to the high potential of this technique, its increasing popularity in analytical laboratories, and, simultaneously, the absence of appropriate platforms enhancing the SERS signal (crucial to observe the Raman effect at low analyte concentrations in solutions, around 1 ppm), we decided to develop our own SERS platforms. As the enhancing layer, we chose gold and silver nanoparticles, because these two have the best SERS properties and each shows affinity for different kinds of analytes, which increases the range of research capabilities. The next step was commercialization, which resulted in the creation of the company 'SERSitive.eu', focusing on the production of highly sensitive (EF = 10⁵-10⁶), homogeneous and reproducible (70-80%) substrates. SERSitive SERS substrates are made by electrodeposition of silver or silver-gold nanoparticles. Thanks to a detailed analysis of data from studies optimizing parameters such as deposition time, temperature of the reaction solution, applied potential, reducer used, and reagent concentrations, using a standardized compound, p-mercaptobenzoic acid (PMBA), at a concentration of 10⁻⁶ M, we have developed a high-performance process for depositing precious metal nanoparticles on the surface of ITO glass. To check the quality of the SERSitive platforms, we examined a wide range of chemical compounds and biological substances. Apart from analytes that have great affinity for metal surfaces (e.g., PMBA), we obtained very good results for those less suited to SERS measurements. We successfully recorded intense and, more importantly, highly repeatable spectra for amino acids (phenylalanine, 10⁻³ M), drugs (amphetamine, 10⁻⁴ M), designer drugs (cathinone derivatives, 10⁻³ M), and medicines, as well as bacteria (Listeria, Salmonella, Escherichia coli) and fungi.
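
The quoted sensitivity (EF = 10⁵-10⁶) refers to the analytical enhancement factor, conventionally computed by comparing the per-molecule SERS intensity with a non-enhanced reference. A minimal sketch with purely illustrative numbers:

```python
def enhancement_factor(I_sers, N_sers, I_ref, N_ref):
    """Analytical SERS enhancement factor:
    EF = (I_SERS / N_SERS) / (I_ref / N_ref),
    where I is the band intensity and N the number of probed molecules."""
    return (I_sers / N_sers) / (I_ref / N_ref)

# Illustrative numbers only: a 10^-6 M PMBA measurement vs. a bulk reference
ef = enhancement_factor(I_sers=5.0e4, N_sers=1.0e6, I_ref=1.0e3, N_ref=5.0e9)
print(f"EF = {ef:.1e}")  # on the order of 10^5, the range reported for these substrates
```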

Keywords: nanoparticles, Raman spectroscopy, SERS, SERS applications, SERS substrates, SERSitive

Procedia PDF Downloads 137
465 A Factor-Analytical Approach on Identities in Environmentally Significant Behavior

Authors: Alina M. Udall, Judith de Groot, Simon de Jong, Avi Shankar

Abstract:

There are many ways in which environmentally significant behavior can be explained. Dominant psychological theories, namely the theory of planned behavior, the norm-activation theory, its extension the value-belief-norm theory, and the theory of habit, do not explain large parts of environmentally significant behavior. A new and rapidly growing approach is to focus on how consumers' identities predict environmentally significant behavior. Identity may be relevant because consumers have many identities that are assumed to guide their behavior; we therefore assume that many identities will guide environmentally significant behavior. In reviewing the literature, over 200 identities have been studied, making it difficult to establish the key identities for explaining environmentally significant behavior. Therefore, this paper first aims to establish the key identities previously used for explaining environmentally significant behavior. Second, the aim is to test which key identities explain environmentally significant behavior. To address these aims, an online survey study (n = 578) was conducted. First, the exploratory factor analysis reveals 15 identity factors, namely: environmentally concerned identity, anti-environmental self-identity, environmental place identity, connectedness with nature identity, green space visitor identity, active ethical identity, carbon off-setter identity, thoughtful self-identity, close community identity, anti-carbon off-setter identity, environmental group member identity, national identity, identification with developed countries, cyclist identity, and thoughtful organisation identity. Furthermore, to help researchers understand and operationalize the identities, the article provides theoretical definitions for each of them, in line with identity theory, social identity theory, and place identity theory. Second, the hierarchical regression shows that only 10 factors significantly and uniquely explain the variance in environmentally significant behavior. In order of predictive power, these are: environmentally concerned identity, anti-environmental self-identity, thoughtful self-identity, environmental group member identity, anti-carbon off-setter identity, carbon off-setter identity, connectedness with nature identity, national identity, and green space visitor identity. Together, the identities explain over 60% of the variance in environmentally significant behavior, a large effect size. Based on this finding, the article presents a new theoretical framework showing the key identities explaining environmentally significant behavior, to help consolidate and align the field.
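
As a sketch of the analytical pipeline described, factor extraction followed by blockwise (hierarchical) regression can be expressed as below; the data are random placeholders, the 45-item instrument and the block split are assumptions, and varimax rotation requires scikit-learn >= 0.24.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
import statsmodels.api as sm

rng = np.random.default_rng(1)
items = rng.normal(size=(578, 45))            # placeholder survey items (n = 578)

# Exploratory factor analysis with varimax rotation (sklearn >= 0.24)
fa = FactorAnalysis(n_components=15, rotation="varimax").fit(items)
scores = fa.transform(items)                  # factor scores, one column per identity factor

behavior = rng.normal(size=578)               # placeholder ESB measure

# Hierarchical (blockwise) regression: block 1 = first 5 factors, block 2 = all 15
m1 = sm.OLS(behavior, sm.add_constant(scores[:, :5])).fit()
m2 = sm.OLS(behavior, sm.add_constant(scores)).fit()
print(f"R2 block 1 = {m1.rsquared:.3f}, R2 full = {m2.rsquared:.3f}, "
      f"delta R2 = {m2.rsquared - m1.rsquared:.3f}")
```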

Keywords: environmentally significant behavior, factor analysis, place identity, social identity

Procedia PDF Downloads 433
464 Geographic Comparison of Serum Vitamin D between Highland and Lowland Schizophrenic Patients in Sumatera Utara

Authors: Novita Linda Akbar, Elmeida Effendy, Mustafa M. Amin

Abstract:

Background: The most common psychotic disorder is schizophrenia. Vitamin D is produced in the skin from the UVB radiation in sunlight, and vitamin D deficiency is common in people with severe mental illness such as schizophrenia. Schizophrenia is a chronic mental illness characterised by positive symptoms, such as hallucinations and delusions, and negative symptoms, such as flat affect and lack of motivation. Patients with schizophrenia may have several environmental risk factors, such as season of birth, latitude, and climate, that have been linked to vitamin D deficiency. There is also a relationship between latitude and the risk of schizophrenia, with an increased incidence rate of schizophrenia seen at higher latitudes. Methods: This was an analytical study, conducted in BLUD RS Jiwa Propinsi Sumatera Utara and RSUD Deli Serdang from May 2016 to June 2016, with a sample of 60 subjects (20 patients living in the highlands, 20 patients living in the lowlands, and 20 healthy controls). Inclusion criteria were schizophrenic patients, both men and women, aged between 18 and 60 years, in the acute phase without agitation or off antipsychotic drugs for two weeks, living in the highlands or lowlands, and willing to participate in this study. Exclusion criteria were a history of other psychotic disorders, comorbidity with another general medical condition, and a history of substance abuse. Serum vitamin D was measured using the ELFA method. Statistical analysis was planned as a numerical comparison using an independent t-test. Results: The average vitamin D level in the group of subjects living in the highlands was 227.6 ng/mL, with a standard deviation of 86.78 ng/mL; the lowest vitamin D level was 138 ng/mL and the highest 482 ng/mL. In the group of subjects living in the lowlands, the mean vitamin D level was higher, at 237.8 ng/mL, with a standard deviation of 100.16 ng/mL and values ranging from 138 to 585 ng/mL. Conclusion and suggestion: The analysis using the Mann-Whitney test showed no significant difference between the mean vitamin D levels by subjects' place of residence (p = 0.652).
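
Since the comparison was ultimately made with a Mann-Whitney U test, a minimal sketch of that analysis looks as follows; the values are simulated around the reported means and SDs, not the patients' data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)

# Placeholder serum vitamin D values (ng/mL) drawn to mimic the reported
# means/SDs; the real analysis would use the measured patient data.
highland = rng.normal(227.6, 86.78, size=20)
lowland  = rng.normal(237.8, 100.16, size=20)

u_stat, p_value = mannwhitneyu(highland, lowland, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")  # p > 0.05 -> no significant difference
```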

Keywords: latitude, schizophrenia, Vitamin D, Sumatera Utara

Procedia PDF Downloads 238
463 Reducing the Computational Cost of a Two-way Coupling CFD-FEA Model via a Multi-scale Approach for Fire Determination

Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Kevin Tinkham, Ella Quigley

Abstract:

Structural integrity is a key performance parameter for cladding products, especially concerning fire performance. Cladding products such as PIR-based sandwich panels are tested rigorously, in line with industrial standards. Physical fire tests are necessary to ensure customer safety but give little information about critical behaviours that can help develop new materials. Numerical modelling is a tool that can help investigate a fire's behaviour further by replicating the fire test. However, fire is an interdisciplinary problem: it is a chemical reaction that behaves fluidly and impacts structural integrity. An analysis using both Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) is needed to capture all aspects of a fire performance test. One method is a two-way coupling analysis that imports the updated changes in thermal data, due to the fire's behaviour, into the FEA solver over a series of iterations. Our recent work with Tata Steel U.K., using a two-way coupling methodology to determine fire performance, has shown that a program called FDS-2-Abaqus can predict a BS 476-22 furnace test with a degree of accuracy. The test demonstrated the fire performance of Tata Steel U.K.'s Trisomet product, a polyisocyanurate (PIR) based sandwich panel used for cladding. Previous works demonstrated the limitations of the current version of the program, the main one being the computational cost of modelling three Trisomet panels, totalling an area of 9 m². The computational cost increases substantially when scaling up to an LPS 1181-1 test, which includes a total panel surface area of 200 m². The FDS-2-Abaqus program is developed further within this paper to overcome this obstacle and better accommodate Tata Steel U.K. PIR sandwich panels. The new developments aim to reduce the computational cost and the error margin compared to experimental data. One avenue explored is a multi-scale approach in the form of Reduced Order Modelling (ROM), which allows the user to include refined details of the sandwich panels, such as the overlapping joints, without a computationally costly mesh. Comparative studies will be made between the new implementations and the previous study completed using the original FDS-2-Abaqus program. Validation will come from physical experiments in line with governing-body standards such as BS 476-22 and LPS 1181-1. The physical experimental data include the panels' gas and surface temperatures and mechanical deformation. Conclusions are drawn, noting the impact factors of the new implementations and discussing the reasonability of scaling up further to a whole warehouse.
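
A common way to build the ROM mentioned here is Proper Orthogonal Decomposition (POD): collect field snapshots, extract a low-rank basis via SVD, and work with a handful of modal coefficients. The sketch below shows the mechanics on placeholder data; it is a generic POD illustration, not the FDS-2-Abaqus implementation.

```python
import numpy as np

# Snapshot matrix: each column is a thermal field from one coupling iteration
# (placeholder data; in a coupled fire model these would be nodal temperatures).
rng = np.random.default_rng(3)
snapshots = rng.normal(size=(10_000, 40))     # 10k nodes x 40 snapshots

# Proper Orthogonal Decomposition via thin SVD
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99) + 1)    # modes capturing 99% of the energy
basis = U[:, :r]                              # reduced basis

# Any new field is represented by r coefficients instead of 10k nodal values
new_field = snapshots[:, 0]
coeffs = basis.T @ new_field
reconstruction = basis @ coeffs
print(r, np.linalg.norm(new_field - reconstruction) / np.linalg.norm(new_field))
```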

Keywords: fire testing, numerical coupling, sandwich panels, thermo fluids

Procedia PDF Downloads 60
462 Automated Evaluation Approach for Time-Dependent Question Answering Pairs on Web Crawler Based Question Answering System

Authors: Shraddha Chaudhary, Raksha Agarwal, Niladri Chatterjee

Abstract:

This work demonstrates a web crawler-based, generalized, end-to-end open-domain Question Answering (QA) system. An efficient QA system requires a significant amount of domain knowledge in order to find an exact and correct answer, in the form of a number, a noun, a short phrase, or a brief piece of text, for the user's question. Analysis of the question, searching the relevant documents, and choosing an answer are three important steps in a QA system. This work uses a web scraper (Beautiful Soup) to extract K documents from the web, where the value of K can be calibrated based on a trade-off between time and accuracy. This is followed by a passage-ranking process using the MS MARCO dataset, trained on 500K queries, to extract the most relevant text passages and thereby shorten the lengthy documents. A QA system is then used to extract the answers from the shortened documents based on the query and to return the top 3 answers. For the evaluation of such systems, accuracy is judged by the exact match between predicted answers and gold answers. But automatic evaluation methods fail due to the linguistic ambiguities inherent in the questions. Moreover, reference answers are often not exhaustive or are out of date, so correct answers predicted by the system are often judged incorrect according to automated metrics. One such scenario arises from the original Google Natural Questions (GNQ) dataset, which was collected and made available in the year 2016. Any such dataset proves inefficient with respect to questions that have time-varying answers. For illustration, if the query is 'Where will the next Olympics be?', the gold answer given in the GNQ dataset is 'Tokyo'. Since the dataset was collected in 2016, and the next Olympics after 2016 were held in Tokyo in 2020, this was correct at the time. But if the same question is asked in 2022, the answer is 'Paris, 2024'. Consequently, any evaluation based on the GNQ dataset will be incorrect. Such erroneous predictions are usually passed to human evaluators for further validation, which is quite expensive and time-consuming. To address this erroneous evaluation, the present work proposes an automated approach for evaluating time-dependent question-answer pairs. In particular, it proposes a metric using the current timestamp along with the top-n predicted answers from a given QA system. To test the proposed approach, the GNQ dataset was used, and the system achieved an accuracy of 78% on a test dataset comprising 100 QA pairs, automatically extracted using an analysis-based approach from 10K QA pairs of the GNQ dataset. The results obtained are encouraging. The proposed technique appears to have the possibility of developing into a useful scheme for gathering precise, reliable, and specific information in a real-time and efficient manner. Our subsequent experiments will be aimed at establishing the efficacy of the above system for a larger set of time-dependent QA pairs.
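
The core idea, judging a prediction against whichever gold answer is valid at the query timestamp, can be captured in a few lines. The sketch below is an illustrative reading of the proposed metric, with a hypothetical gold table and validity windows, not the authors' exact implementation:

```python
from datetime import date

# Each question maps to a list of (valid_from, valid_to, answer) gold entries,
# so correctness is judged against the answer valid at query time.
GOLD = {
    "where will the next olympics be?": [
        (date(2016, 1, 1), date(2020, 12, 31), "tokyo"),
        (date(2021, 1, 1), date(2024, 12, 31), "paris"),
    ],
}

def time_aware_match(question, predicted_top_n, query_date):
    """Return True if any of the top-n predictions matches the gold answer
    that is valid at query_date (exact match after normalization)."""
    for valid_from, valid_to, gold in GOLD.get(question.lower(), []):
        if valid_from <= query_date <= valid_to:
            return any(gold == p.strip().lower() for p in predicted_top_n)
    return False  # no gold answer valid at this date

print(time_aware_match("Where will the next Olympics be?",
                       ["Paris", "Los Angeles", "Tokyo"], date(2022, 6, 1)))
```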

Keywords: web-based information retrieval, open domain question answering system, time-varying QA, QA evaluation

Procedia PDF Downloads 88
461 Application of the Reliability Method to the Analysis of the Stability Limit States of Large Concrete Dams

Authors: Mustapha Kamel Mihoubi, Essadik Kerkar, Abdelhamid Hebbouche

Abstract:

Given the randomness of most of the factors affecting the stability of a gravity dam, probability theory is generally used to assess the risk of failure; since there is no sharp transition from the stable state to the failed state, the stability failure process is treated as a probabilistic event. Controlling the risk of failure is of capital importance and rests on a cross-analysis of the severity of the consequences and the probability of occurrence of identified major accidents, which can pose a significant risk to concrete dam structures. Probabilistic risk analysis models are used to provide a better understanding of the reliability and structural failure of such works, including when calculating the stability of large structures exposed to major risk in the event of an accident or breakdown. This work studies the probability of failure of concrete dams through the application of reliability analysis methods, including those used in engineering practice. In our case, level II methods are applied via the study of limit states: the probability of failure is estimated by analytical methods of the FORM (First Order Reliability Method) and SORM (Second Order Reliability Method) types. By way of comparison, a level III method was also used, which performs a full analysis of the problem by integrating the joint probability density function of the random variables over the failure domain, using Monte Carlo simulation. Taking into account the change in stresses under the normal, exceptional, and extreme load combinations acting on the dam, the calculations yielded acceptable failure probability values that largely corroborate the theory; in fact, the probability of failure tends to increase with increasing load intensity, causing a significant decrease in safety, especially in the presence of combinations of exceptional and extreme loads. Shear forces then induce sliding that threatens the reliability of the structure through intolerable values of the probability of failure, especially when uplift increases under a hypothetical failure of the drainage system.
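
As a complement to FORM/SORM, the level III Monte Carlo estimate referred to here amounts to sampling the random variables, evaluating the limit-state function g, and counting failures. The sketch below does this for a generic sliding limit state; all distributions and values are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
n = 1_000_000

# Illustrative sliding limit state for a gravity dam section, per running meter:
# g = resisting shear - driving shear; failure when g < 0.
tan_phi  = rng.normal(0.75, 0.10, n)               # friction coefficient on the base
cohesion = rng.lognormal(np.log(0.30), 0.25, n)    # cohesion (MPa)
normal_f = rng.normal(50.0, 5.0, n)                # effective normal force (MN/m)
uplift   = rng.normal(12.0, 2.5, n)                # uplift force (MN/m)
shear    = rng.normal(22.0, 4.0, n)                # driving shear force (MN/m)
base     = 10.0                                    # contact area per running meter (m^2)

g = tan_phi * (normal_f - uplift) + cohesion * base - shear
pf = np.mean(g < 0.0)              # Monte Carlo estimate of failure probability
beta = -norm.ppf(pf)               # equivalent reliability index
print(f"Pf = {pf:.2e}, beta = {beta:.2f}")
```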

Keywords: dam, failure, limit state, Monte Carlo, reliability, probability, sliding, Taylor

Procedia PDF Downloads 307
460 Statistical Pattern Recognition for Biotechnological Process Characterization Based on High Resolution Mass Spectrometry

Authors: S. Fröhlich, M. Herold, M. Allmer

Abstract:

Early-stage quantitative analysis of host cell protein (HCP) variations is challenging yet necessary for comprehensive bioprocess development. High resolution mass spectrometry (HRMS) provides a high-end technology for accurate identification along with quantitative information. Here we describe a flexible HRMS assay platform to quantify HCPs relevant in microbial expression systems such as E. coli, in both upstream and downstream development, by means of multivariate data analysis (MVDA) tools. Cell pellets were lysed and proteins extracted; purified samples were not treated further before applying the SMART tryptic digest kit. Peptide separation was optimized using an RP-UHPLC separation platform. HRMS-MS/MS analysis was conducted on an Orbitrap Velos Elite applying CID. Quantification was performed label-free, taking into account ionization properties and physicochemical peptide similarities. Results were analyzed using SIEVE 2.0 (Thermo Fisher Scientific) and SIMCA (Umetrics AG). The developed HRMS platform was applied to an E. coli expression set with varying productivity and the corresponding downstream process. Selected HCPs were successfully quantified within the fmol range. Analysing HCP networks based on pattern analysis facilitated low-level quantification and enhanced validity. This approach is of high relevance for high-throughput screening experiments during upstream development, e.g., for titer determination, dynamic HCP network analysis, or product characterization. Considering the downstream purification process, physicochemical clustering of identified HCPs is relevant for adjusting buffer conditions accordingly. Overall, the technology provides an innovative approach for label-free MS-based quantification relying on statistical pattern analysis and comparison. Absolute quantification based on physicochemical properties and a peptide similarity score provides a technological approach that avoids sophisticated sample preparation strategies and has therefore proven straightforward, sensitive, and highly reproducible in terms of product characterization.

Keywords: process analytical technology, mass spectrometry, process characterization, MVDA, pattern recognition

Procedia PDF Downloads 235
459 Vibration and Freeze-Thaw Cycling Tests on Fuel Cells for Automotive Applications

Authors: Gema M. Rodado, Jose M. Olavarrieta

Abstract:

Hydrogen fuel cell technologies have experienced a great boost in the last decades, with production of these devices increasing significantly for both stationary and portable (mainly automotive) applications, driven by two main factors: environmental pollution and energy shortage. A fuel cell is an electrochemical device that converts chemical energy directly into electricity, using hydrogen and oxygen gases as reactants and producing water and heat as byproducts of the chemical reaction. Fuel cells, specifically those of Proton Exchange Membrane (PEM) technology, are considered an alternative to internal combustion engines, mainly because of their near-zero emissions, high efficiency, and low operating temperatures (< 373 K). The introduction of fuel cells into the automotive market requires the development of standardized and validated procedures to test and evaluate their performance under different environmental conditions, including vibrations and freeze-thaw cycles. Vibration and extremely low/high temperatures can affect the physical integrity, or even the proper operation and performance, of a fuel cell stack placed in a vehicle in circulation or under different climatic conditions. The main objective of this work is the development and validation of vibration and freeze-thaw cycling test procedures for fuel cell stacks intended for vehicles, in order to consolidate their safety, performance, and durability. In this context, different experimental tests were carried out at the facilities of the National Hydrogen Centre (CNH2). The experimental equipment used was: a vibration platform (shaker) for vibration tests on fuel cells in three axis directions with different vibration profiles; a walk-in climatic chamber to test the starting, operating, and stopping behavior of fuel cells under defined extreme conditions; and a test station, designed and developed by the CNH2, to test and characterize PEM fuel cell stacks up to 10 kWe. A 5 kWe PEM fuel cell stack in off-operation mode was used to carry out two independent experimental procedures. On the one hand, the fuel cell was subjected to a sinusoidal vibration test on the shaker in the three axis directions, defined by acceleration and amplitudes in the frequency range of 7 to 200 Hz, for a total of three hours in each direction. On the other hand, the climatic chamber was used to simulate freeze-thaw cycles over a temperature range between 313 K and 243 K, with an average relative humidity of 50% and a recommended ramp-up and ramp-down rate of 1 K/min. The polarization curve and gas leakage rate were determined at the fuel cell stack test station before and after the vibration and freeze-thaw tests to evaluate the robustness of the stack. The results were very similar, indicating that the tests did not affect the structure or performance of the fuel cell stack. The proposed procedures were verified and can be used as a starting point for tests on other fuel cells.
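
For the freeze-thaw procedure, the chamber setpoint sequence follows directly from the quoted temperature limits and the 1 K/min ramp rate. A minimal sketch of generating such a profile is shown below; the dwell time, cycle count, and time step are assumed values, since the abstract does not specify them.

```python
import numpy as np

def freeze_thaw_profile(t_high=313.0, t_low=243.0, dwell_min=60.0,
                        ramp_k_per_min=1.0, n_cycles=3, dt_min=1.0):
    """Build a setpoint sequence (K) for a climatic-chamber freeze-thaw test:
    ramp down/up at `ramp_k_per_min` with dwells at each temperature extreme.
    Default dwell, cycle count, and time step are illustrative assumptions."""
    ramp_steps  = int(abs(t_high - t_low) / ramp_k_per_min / dt_min)
    dwell_steps = int(dwell_min / dt_min)
    down  = np.linspace(t_high, t_low, ramp_steps)
    up    = np.linspace(t_low, t_high, ramp_steps)
    cold  = np.full(dwell_steps, t_low)
    hot   = np.full(dwell_steps, t_high)
    cycle = np.concatenate([down, cold, up, hot])
    return np.tile(cycle, n_cycles)

profile = freeze_thaw_profile()
print(f"{profile.size} setpoints, {profile.min():.0f} K to {profile.max():.0f} K")
```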

Keywords: climatic chamber, freeze-thaw cycles, PEM fuel cell, shaker, vibration tests

Procedia PDF Downloads 100
458 Exploratory Study on Mediating Role of Commitment-to-Change in Relations between Employee Voice, Employee Involvement and Organizational Change Readiness

Authors: Rohini Sharma, Chandan Kumar Sahoo, Rama Krishna Gupta Potnuru

Abstract:

Strong competitive forces and the need to achieve efficiency are forcing organizations to realize the necessity and inevitability of change, and the trend does not appear to be abating. Researchers have estimated that about two-thirds of change projects fail. Empirical evidence further shows that organizations invest significantly in planned change, but the people side is accounted for only in a token or instrumental way, which is identified as one of the important reasons why change endeavours fail. Whatever the reason for change, organizational change readiness must be gauged prior to the institutionalization of organizational change. Hence, this study examines the influence of employee voice and employee involvement on organizational change readiness via commitment-to-change, an area yet to be extensively studied. Although a recent study investigated the interrelationship between leadership, organizational change readiness, and commitment to change, our study further examines these constructs in relation to employee voice and employee involvement, which play a consequential role in organizational change readiness. Furthermore, an integrated conceptual model weaving the varied concepts relating to organizational readiness, with commitment-to-change as mediator, was found to be an area requiring more theorizing and empirical validation, and this study, rooted in an Indian public sector organization, is a step in that direction. Data for the study were collected through a survey among employees of Rourkela Steel Plant (RSP), a unit of Steel Authority of India Limited (SAIL) and the first integrated steel plant in the public sector in India, for which a stratified random sampling method was adopted. The schedule was distributed to around 700 employees, of whom 516 returned complete responses. Pre-validated scales were used, and all variables were measured on a five-point Likert scale ranging from "strongly disagree (1)" to "strongly agree (5)". Structural equation modeling (SEM) using AMOS 22 was employed to examine the hypothesized model, offering a simultaneous test of an entire system of variables. The results show that the relationships between employee voice and commitment-to-change, between employee involvement and commitment-to-change, and between commitment-to-change and organizational change readiness were all significant. To test the mediation hypotheses, Baron and Kenny's technique was used. Examination of the direct and mediated effects confirmed that commitment-to-change partially mediated the relationship between employee involvement and organizational change readiness, but did not mediate the relationship between employee voice and organizational change readiness. The empirical exploration therefore establishes that it is important to harness employees' valuable suggestions regarding change in order to build organizational change readiness. Regarding employee involvement, sharing information and involving people in decision-making create a participative climate, which builds employee commitment during change; commitment-to-change, in turn, fosters organizational change readiness.
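
Baron and Kenny's technique, as applied here, reduces to three regressions whose coefficients are compared. A minimal sketch with simulated Likert-style data (n = 516, matching the sample size) follows; the effect sizes are invented for illustration, not taken from the study.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 516  # matches the number of complete responses

# Placeholder 5-point Likert-style scale scores; the study used survey scales.
involvement = rng.normal(3.5, 0.8, n)
commitment  = 0.5 * involvement + rng.normal(0, 0.7, n)        # mediator
readiness   = 0.3 * involvement + 0.4 * commitment + rng.normal(0, 0.6, n)

# Baron & Kenny steps: (1) X -> Y, (2) X -> M, (3) X + M -> Y
step1 = sm.OLS(readiness, sm.add_constant(involvement)).fit()
step2 = sm.OLS(commitment, sm.add_constant(involvement)).fit()
X3 = sm.add_constant(np.column_stack([involvement, commitment]))
step3 = sm.OLS(readiness, X3).fit()

# Partial mediation: X's effect shrinks but stays significant once M is added
print(f"X->Y total effect : {step1.params[1]:.3f}")
print(f"X->Y direct effect: {step3.params[1]:.3f} (controlling for M)")
```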

Keywords: commitment-to-change, change management, employee voice, employee involvement, organizational change readiness

Procedia PDF Downloads 314
457 Building Student Empowerment through Live Commercial Projects: A Reflective Account of Participants

Authors: Nilanthi Ratnayake, Wen-Ling Liu

Abstract:

Prior research indicates an increasing gap between the skills and capabilities of graduates and the demands of the contemporary workplace across the globe. The challenge of addressing this issue lies primarily in the hands of higher education institutes/universities. In particular, surveys of UK employers and retailers found that soft skills, including communication, numeracy, teamwork, confidence, analytical ability, digital/IT skills, business sense, language, and social skills, are highly valued by graduate employers, and various assessed and non-assessed learning exercises have already been embedded into university curricula to develop them. To this end, this research study explores the reflections of postgraduate students participating in a live commercial project (i.e., designing an advertising campaign for open days, summer school, etc.) implemented with the intention of offering a transformative experience. A qualitative research methodology was followed, collecting data from three types of target audiences (students, academics, and employers) via a series of personal interviews and focus group discussions. Recorded data were transcribed, entered into NVivo, and analysed using meaning condensation and content analysis. Students reported a very positive impact on self-efficacy, especially in relation to soft skills and confidence in seeking employment opportunities. In addition, the project reduced cultural barriers to general communication for international students. Academic staff and potential employers who attended the presentation day expressed their appreciation of a lifelong experience offered to students, and indeed believed that projects of this type contribute significantly to enhancing students' skills and capabilities to meet the demands of employers. In essence, the key findings demonstrate that integrating knowledge-based skills into a live commercial project helps individuals make the transition from education to employment, in terms of skills, abilities, and work behaviours, more effectively than some other activities/assessments currently in place in higher education institutions/universities.

Keywords: soft skills, commercially live project, higher education, student participation

Procedia PDF Downloads 342
456 Improving Collective Health and Social Care through a Better Consideration of Sex and Gender: Analytical Report by the French National Authority for Health

Authors: Thomas Suarez, Anne-Sophie Grenouilleau, Erwan Autin, Alexandre Biosse-Duplan, Emmanuelle Blondet, Laurence Chazalette, Marie Coniel, Agnes Dessaigne, Sylvie Lascols, Andrea Lasserre, Candice Legris, Pierre Liot, Aline Metais, Karine Petitprez, Christophe Varlet, Christian Saout

Abstract:

Background: The role of biological sex and gender identity, whether assigned or chosen, as determinants of health is far from a recent discovery: several reports have stressed how being a woman or a man can affect health on various scales. However, taking this into consideration beyond stereotypes and rigid binary assumptions still seems to be a work in progress. Method: The report is a synthesis of a variety of specific topics, each studied by a specialist from the French National Authority for Health (HAS) through an analysis of the existing literature on both the healthcare policy construction process and its instruments (norms, data analysis, clinical trials, guidelines, and professional practices). This work also involved a policy analysis of recent French public health laws and a retrospective study of guidelines with a gender mainstreaming approach. Results: The analysis showed that although sex and gender are well-known determinants of health, their consideration by both public policy and health operators was often incomplete, as it did not incorporate how sex and gender interact with each other and with other factors. As a result, health and social care systems and their professionals tend to reproduce stereotypical and inadequate habits. Although the available data often allow sex and gender to be taken into consideration, such data are often underused in practice guidelines and policy formulation. Another consequence is a lack of inclusiveness towards transgender or intersex persons. Conclusions: This report first urges raising awareness among all actors of health, in its broadest definition, that sex and gender matter beyond first-glance conclusions. It makes a series of recommendations to reshape policy construction in the health sector on the one hand, and to design public health instruments that are more inclusive regarding sex and gender on the other. The HAS has finally committed to integrating sex and gender concerns into its working methods, so as to be a driving force in the spread of these concerns.

Keywords: biological sex, determinants of health, gender, healthcare policy instruments, social accompaniment

Procedia PDF Downloads 113
455 A Web and Cloud-Based Measurement System Analysis Tool for the Automotive Industry

Authors: C. A. Barros, Ana P. Barroso

Abstract:

Any industrial company needs to determine the amount of variation that exists within its measurement process and guarantee the reliability of its data by studying the performance of its measurement system in terms of linearity, bias, repeatability, reproducibility, and stability. This issue is critical for automotive industry suppliers, who are required to be certified to the IATF 16949:2016 standard (which replaces ISO/TS 16949) of the International Automotive Task Force, defining the requirements of a quality management system for companies in the automotive industry. Measurement System Analysis (MSA) is one of the mandatory tools. Frequently, the measurement system in companies is not connected to the equipment and does not incorporate the methods proposed by the Automotive Industry Action Group (AIAG). To address these constraints, an R&D project is in progress whose objective is to develop a web and cloud-based MSA tool. This MSA tool incorporates Industry 4.0 concepts, such as Internet of Things (IoT) protocols to assure the connection with the measuring equipment, cloud computing, artificial intelligence, statistical tools, and advanced mathematical algorithms. This paper presents the preliminary findings of the project. The web and cloud-based MSA tool is innovative because it implements all the statistical tests proposed in the MSA-4 reference manual from AIAG, as well as other emerging methods and techniques. As it is integrated with the measuring devices, it reduces the manual input of data and therefore the errors. The tool ensures the traceability of all performed tests and can be used in quality laboratories and on production lines. Besides, it monitors MSAs over time, allowing both the analysis of deviations from the variation of the measurements performed and the management of measurement equipment and calibrations. To develop the MSA tool, a ten-step approach was implemented. First, a benchmarking analysis of the current competitors and commercial solutions linked to MSA was performed with respect to the Industry 4.0 paradigm. Next, an analysis of the size of the target market for the MSA tool was carried out. Afterwards, data flow and traceability requirements were analysed in order to implement an IoT data network that interconnects with the equipment, preferably wirelessly. The MSA web solution was designed under UI/UX principles, and an API in Python was developed to perform the algorithms and the statistical analysis. Continuous validation of the tool by companies is being performed to assure real-time management of the 'big data'. The main results of this R&D project are: the web and cloud-based MSA tool; the Python API; new algorithms brought to the market; and the UI/UX style guide of the tool. The proposed MSA tool adds value to the state of the art as it ensures an effective response to the new challenges of measurement systems, which are increasingly critical in production processes. Although the automotive industry triggered the development of this innovative MSA tool, other industries would also benefit from it. Currently, companies from the molds and plastics, chemical, and food industries are already validating it.
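
Among the MSA-4 procedures such a tool must implement, the ANOVA-based gage repeatability and reproducibility (R&R) study is central. A minimal Python sketch of that computation on a hypothetical crossed design (10 parts x 3 operators x 3 repeats) is given below; it illustrates the standard variance-component arithmetic, not the project's actual API.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical crossed gage R&R layout: 10 parts x 3 operators x 3 repeats
rng = np.random.default_rng(6)
parts, opers, reps = 10, 3, 3
rows = [(p, o, 10 + 0.5 * p + 0.1 * o + rng.normal(0, 0.05))
        for p in range(parts) for o in range(opers) for _ in range(reps)]
df = pd.DataFrame(rows, columns=["part", "operator", "y"])

# Two-way ANOVA with interaction (the AIAG MSA-4 ANOVA method)
model = ols("y ~ C(part) + C(operator) + C(part):C(operator)", data=df).fit()
aov = sm.stats.anova_lm(model, typ=2)
ms = aov["sum_sq"] / aov["df"]                 # mean squares

# Variance components from the mean squares
s2_rep  = ms["Residual"]                                         # repeatability
s2_int  = max((ms["C(part):C(operator)"] - ms["Residual"]) / reps, 0)
s2_oper = max((ms["C(operator)"] - ms["C(part):C(operator)"]) / (parts * reps), 0)
grr = s2_rep + s2_int + s2_oper                                  # total gage R&R
print(f"%GRR of total variance: {100 * grr / df['y'].var(ddof=1):.1f}%")
```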

Keywords: automotive Industry, industry 4.0, Internet of Things, IATF 16949:2016, measurement system analysis

Procedia PDF Downloads 202
454 Place-Based Practice: A New Zealand Rural Nursing Study

Authors: Jean Ross

Abstract:

Rural nursing is not an identified professional identity in the UK, unlike in the USA, Canada, and Australia, which recognize rural nursing as a specialty scope of practice. In New Zealand, rural nursing is an underrepresented aspect of nursing practice; it is misunderstood and does not fit easily within the wider nursing profession or the policies governing practice. This study, situated within the New Zealand context, adds to the international studies aligned with rural nursing practice. It addresses a gap in the literature by striving to identify and strengthen awareness of rural nurses' changing and adapting identity, to increase their understanding and articulation of it, and to provide an opportunity to appreciate their contribution to the delivery of rural health care. In addition, this study adds to the growing global rural nursing knowledge and theoretical base. This research is a continuation of the author's academic involvement and ongoing relationships with the rural nursing sector, national policy analysts, and health care planners since the 1990s. These relationships have led to an awareness that, despite rural nurses' efforts to explain the particular nuances which make up their practice, there has been little recognition by the profession of the need to establish rural nursing as a specialty. The research explored why nurses who practiced in the rural Otago region of New Zealand between the 1990s and early 2000s moved away from their traditional identities as district, practice, or public health nurses and looked towards a more appropriate identity reflecting their emerging practice. This qualitative research, situated within the interpretive paradigm, embeds the retrospective study within the discipline of nursing and engages with the concepts of place and governmentality. Interviews with national key informants and Otago regional rural nurses generated the data, which were analyzed using thematic analysis. Stemming from the analyses, an analytical diagrammatic matrix was developed that presents rural nursing as a 'place-based practice', governed both from within and beyond location, and shows how the nurse aligns the self in the rural community as a meaningful provider of health care. Promoting this matrix may provide a focal discussion point within the international spectrum of nursing, and likewise between rural and non-rural nurses, which it is hoped will generate further debate about the different nuances aligned with rural nursing practice. Further, insights from this paper may capture key aspects and issues related to identity formation among rural nurses in the UK, New Zealand, Canada, the USA, and Australia.

Keywords: matrix, place, nursing, rural

Procedia PDF Downloads 126
453 Mechanical, Thermal and Biodegradable Properties of Bioplast-Spruce Green Wood Polymer Composites

Authors: A. Atli, K. Candelier, J. Alteyrac

Abstract:

Environmental and sustainability concerns push industries to manufacture alternative materials with less environmental impact. Wood Plastic Composites (WPCs), produced by blending biopolymers and natural fillers, not only permit tailoring the desired properties of materials but also offer a solution that meets environmental and sustainability requirements. This work presents the elaboration and characterization of fully green WPCs prepared by blending a biopolymer, BIOPLAST® GS 2189, with different amounts of spruce sawdust used as filler. Since both components are bio-based, the resulting material is entirely environmentally friendly. The mechanical, thermal, and structural properties of these WPCs were characterized by different analytical methods, including tensile, flexural, and impact tests, Thermogravimetric Analysis (TGA), Differential Scanning Calorimetry (DSC), and X-ray Diffraction (XRD). Their water absorption properties and resistance to termite and fungal attack were determined in relation to wood filler content. The tensile and flexural moduli of the WPCs increased with increasing amounts of wood filler in the biopolymer, but the WPCs became more brittle compared to the neat polymer. Incorporation of spruce sawdust modified the thermal properties of the polymer: the degradation, cold crystallization, and melting temperatures shifted to higher values when spruce sawdust was added. The termite, fungal, and water absorption resistance of the WPCs decreased with increasing wood content, but the composites remained in durability class 1 (durable) for fungal resistance and were rated 1 (attempted attack) in the visual rating for termite resistance, except for the WPC with the highest wood content (30 wt%), which was rated 2 (slight attack), still indicating long-term durability. All the results showed the possibility of producing easily injectable composite materials with adjustable properties by combining BIOPLAST® GS 2189 and spruce sawdust. Such lightweight WPCs thus allow both the recycling of wood industry byproducts and the production of a fully ecological material.

Keywords: biodegradability, color measurements, durability, mechanical properties, melt flow index, MFI, structural properties, thermal properties, wood-plastic composites, WPCs

Procedia PDF Downloads 126
452 Emotion Regulation and Executive Functioning Scale for Children and Adolescents (REMEX): Scale Development

Authors: Cristina Costescu, Carmen David, Adrian Roșan

Abstract:

Executive functions (EF) and emotion regulation (ER) strategies are processes that allow individuals to function adaptively and remain goal-oriented, which is essential for success in daily living activities, at school, and in social contexts. The Emotion Regulation and Executive Functioning Scale for Children and Adolescents (REMEX) is an empirically based tool (grounded in the model of EF developed by Diamond) for evaluating significant dimensions of child and adolescent EF and ER strategies, mainly in school contexts. The instrument measures the following dimensions: working memory, inhibition, cognitive flexibility, executive attention, planning, emotional control, and emotion regulation strategies. Building the instrument involved not only a top-down process, in which content was selected in accordance with prominent models of EF, but also a bottom-up one, in which valid contexts where EF and ER are put to use were identified. For the construction of the instrument, we conducted three focus groups with teachers and other professionals, since the aim was to develop an accurate, objective, and ecological instrument. The focus group method was used to address each dimension and to yield a bank of items for further testing. Each dimension is addressed through a task administered by the examiner and through several items derived from the main task. For the validation of the instrument, we plan to use item response theory (IRT), also known as latent response theory, which attempts to explain the relationship between latent traits (unobservable cognitive processes) and their manifestations (i.e., observed outcomes, responses, or performance). REMEX is an ecological scale that integrates the current scientific understanding of emotion regulation and EF, is directly applicable to school contexts, and can be very useful for developing intervention protocols. We plan to test its convergent validity against the Childhood Executive Functioning Inventory (CHEXI) and the Emotion Dysregulation Inventory (EDI), and its divergent validity between a group of typically developing children and children with neurodevelopmental disorders aged 6 to 9 years. In a previous pilot study, we enrolled a sample of 40 children with autism spectrum disorders and attention-deficit/hyperactivity disorder aged 6 to 12 years and applied the above-mentioned scales (CHEXI and EDI). Our results showed that deficits in planning, behavior regulation, inhibition, and working memory predict high levels of emotional reactivity, leading to emotional and behavioral problems. Considering these previous results, we expect our findings to provide support for the validity and reliability of REMEX as an ecological instrument for assessing emotion regulation and EF in children, and for key features of its use in intervention protocols.
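
To illustrate the IRT framework mentioned above, a minimal sketch of the two-parameter logistic (2PL) model follows, assuming Python with NumPy; the discrimination and difficulty values are illustrative placeholders, not estimates from REMEX data.

```python
import numpy as np

# Minimal sketch of the 2PL item response function: the probability of a
# correct item response given latent trait theta, item discrimination a,
# and item difficulty b. Parameter values below are assumed for illustration.

def p_correct_2pl(theta, a, b):
    """2PL model: P(X = 1 | theta) = 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)   # latent trait levels (e.g., working memory)
a, b = 1.5, 0.0                 # assumed discrimination and difficulty

for t, p in zip(theta, p_correct_2pl(theta, a, b)):
    print(f"theta = {t:+.1f}  P(correct) = {p:.3f}")
```

In an IRT calibration, a and b would be estimated per item from response data; the sketch only shows the item response function that links latent traits to observed responses.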

Keywords: executive functions, emotion regulation, children, item response theory, focus group

Procedia PDF Downloads 87
451 Transient Heat Transfer: Experimental Investigation near the Critical Point

Authors: Andreas Kohlhepp, Gerrit Schatte, Christoph Wieland, Hartmut Spliethoff

Abstract:

In recent years, research on heat transfer phenomena of water and other working fluids near the critical point has attracted growing interest for power engineering applications. To match the highly volatile characteristics of renewable energies, conventional power plants need to shift towards flexible operation. This requires speeding up the load change dynamics of steam generators and their heating surfaces near the critical point. In dynamic load transients, both a high heat flux with an unfavorable ratio to the mass flux and a large difference between fluid and wall temperatures may cause problems. These may lead to deteriorated heat transfer (at supercritical pressures), dry-out, or departure from nucleate boiling (at subcritical pressures), all of which cause an extensive rise in temperatures. For relevant technical applications, the heat transfer coefficients need to be predicted correctly in transient scenarios to prevent damage to the heated surfaces (membrane walls, tube bundles, or fuel rods). In transient processes, the state-of-the-art method of calculating heat transfer coefficients is to apply a multitude of different steady-state correlations to the momentary local parameters at each time step. This approach does not necessarily reflect the different cases that may lead to a significant variation of the heat transfer coefficients, and it shows gaps in the individual ranges of validity. An algorithm was implemented to calculate the transient behavior of steam generators during load changes. It is used to assess existing correlations for transient heat transfer calculations. It is also desirable to validate the calculations against experimental data. Using a new full-scale supercritical thermo-hydraulic test rig, experimental data are obtained to describe the transient phenomena under the dynamic boundary conditions mentioned above and to serve for the validation of transient steam generator calculations. The test rig was specially designed to improve correlations for predicting the onset of deteriorated heat transfer in both stationary and transient cases. It is a closed-loop design with a directly electrically heated evaporation tube; the total heating power of the evaporator tube and the preheater is 1 MW. To allow a wide range of parameters, including supercritical pressures, the maximum pressure rating is 380 bar. The measurements capture the most important extrinsic thermo-hydraulic parameters. Moreover, a high geometric resolution allows the local heat transfer coefficients and fluid enthalpies to be determined accurately.
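
As an illustration of the per-time-step use of steady-state correlations described above, the following sketch in Python evaluates one classic single-phase correlation (Dittus-Boelter) along an assumed load ramp. The correlation choice, tube geometry, fluid properties, and ramp are all assumptions for illustration; they are not the correlations or operating conditions assessed in the study.

```python
import numpy as np

# Quasi-steady sketch: at each time step, a steady-state correlation (here
# Dittus-Boelter, Nu = 0.023 Re^0.8 Pr^0.4) is evaluated for the momentary
# local fluid state. All property values and the load ramp are assumed.

def htc_dittus_boelter(mass_flux, d_inner, mu, pr, k_fluid):
    """Heat transfer coefficient [W/m^2 K] from the Dittus-Boelter correlation."""
    re = mass_flux * d_inner / mu      # Reynolds number from mass flux G
    nu = 0.023 * re**0.8 * pr**0.4     # Nusselt number (heating case)
    return nu * k_fluid / d_inner

d = 0.024                                   # tube inner diameter [m] (assumed)
times = np.linspace(0.0, 60.0, 7)           # [s]
G = np.linspace(1000.0, 600.0, times.size)  # mass flux ramp [kg/m^2 s] (assumed)

for t, g in zip(times, G):
    # Constant properties are a simplification; near the critical point they
    # vary strongly and would be re-evaluated from an equation of state.
    h = htc_dittus_boelter(g, d, mu=8.0e-5, pr=1.2, k_fluid=0.45)
    print(f"t = {t:5.1f} s  G = {g:7.1f} kg/m2s  h = {h:9.1f} W/m2K")
```

The gaps noted in the abstract arise precisely because such correlations are each valid only within a limited parameter range, which a transient trajectory can leave.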

Keywords: departure from nucleate boiling, deteriorated heat transfer, dryout, supercritical working fluid, transient operation of steam generators

Procedia PDF Downloads 210
450 Rapid Discrimination of Porcine and Tilapia Fish Gelatin by Fourier Transform Infrared-Attenuated Total Reflection Combined with Two-Dimensional Infrared Correlation Analysis

Authors: Norhidayu Muhamad Zain

Abstract:

Gelatin, a purified protein derived mostly from porcine and bovine sources, is used widely in the food manufacturing, pharmaceutical, and cosmetic industries. However, the presence of any porcine-related product is strictly forbidden for Muslim and Jewish consumption. Therefore, analytical methods offering reliable results to differentiate the sources of gelatin are needed. The aim of this study was to differentiate the sources of gelatin (porcine and tilapia fish) using Fourier transform infrared-attenuated total reflection (FTIR-ATR) combined with two-dimensional infrared (2DIR) correlation analysis. Porcine gelatin (PG) and tilapia fish gelatin (FG) samples were diluted in distilled water at concentrations ranging from 4 to 20% (w/v). The samples were then analysed using FTIR-ATR and 2DIR correlation software. The results showed a significant difference in the pattern map of the synchronous spectra in the region of 1000 cm⁻¹ to 1100 cm⁻¹ between PG and FG samples. The auto peak at 1080 cm⁻¹, attributed to the C-O functional group, was observed at higher intensity in PG samples than in FG samples. Meanwhile, two auto peaks (1080 cm⁻¹ and 1030 cm⁻¹) of lower intensity were identified in FG samples. In addition, using 2D correlation analysis, the originally broad water OH bands in the 1D IR spectra could be effectively differentiated into six auto peaks located at 3630, 3340, 3230, 3065, 2950, and 2885 cm⁻¹ for PG samples and five auto peaks at 3630, 3330, 3230, 3060, and 2940 cm⁻¹ for FG samples. Based on the rule proposed by Noda, the sequence of the spectral changes in PG samples is as follows: NH₃⁺ amino acid > CH₂ and CH₃ aliphatic > OH stretch > carboxylic acid OH stretch > NH in secondary amide > NH in primary amide. In contrast, the sequence was in exactly the opposite direction for FG samples, and thus the two samples yield different 2D correlation spectra in the range from 2800 cm⁻¹ to 3700 cm⁻¹. This method may provide a rapid determination of gelatin source for application in food, pharmaceutical, and cosmetic products.
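
For readers unfamiliar with generalized 2D correlation analysis, a minimal sketch of the underlying computation follows, assuming Python with NumPy: the synchronous spectrum is the covariance of the mean-centered ("dynamic") spectra, and the asynchronous spectrum is obtained via the Hilbert-Noda transform. The synthetic spectra below are placeholders, not the FTIR-ATR data of this study.

```python
import numpy as np

# Noda's generalized 2D correlation analysis over m perturbation-dependent
# spectra (here, concentrations): synchronous = covariance of dynamic spectra,
# asynchronous = covariance after the Hilbert-Noda transform.

def noda_2d(spectra):
    """spectra: (m, n) array of m spectra over n wavenumber points."""
    m, n = spectra.shape
    dyn = spectra - spectra.mean(axis=0)       # dynamic (mean-centered) spectra
    sync = dyn.T @ dyn / (m - 1)               # synchronous spectrum
    # Hilbert-Noda matrix: N[j, k] = 1 / (pi * (k - j)), zero on the diagonal.
    j, k = np.meshgrid(np.arange(m), np.arange(m), indexing="ij")
    with np.errstate(divide="ignore"):
        noda = np.where(j == k, 0.0, 1.0 / (np.pi * (k - j)))
    async_ = dyn.T @ (noda @ dyn) / (m - 1)    # asynchronous spectrum
    return sync, async_

# Synthetic example: 5 "concentrations", 200 wavenumber points, two peaks
# loosely placed at 1080 and 1030 cm-1 to echo the abstract's region.
rng = np.random.default_rng(0)
wn = np.linspace(1000, 1100, 200)
conc = np.linspace(4, 20, 5)[:, None]
peaks = np.exp(-((wn - 1080) / 8) ** 2) + 0.5 * np.exp(-((wn - 1030) / 8) ** 2)
spectra = conc * peaks + 0.01 * rng.standard_normal((5, 200))

sync, async_ = noda_2d(spectra)
print(sync.shape, async_.shape)   # (200, 200) (200, 200)
```

Auto peaks are the diagonal maxima of the synchronous spectrum; Noda's rules combine the signs of synchronous and asynchronous cross peaks to infer the sequence of spectral changes reported in the abstract.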

Keywords: two-dimensional infrared (2DIR) correlation analysis, Fourier transform infrared-attenuated total reflection (FTIR-ATR), porcine gelatin, tilapia fish gelatin

Procedia PDF Downloads 227
449 Two-Dimensional Dynamic Motion Simulations of an F1 Rear Wing Flap

Authors: Chaitanya H. Acharya, Pavan Kumar P., Gopalakrishna Narayana

Abstract:

In the realm of aerodynamics, numerous vehicles incorporate moving components to enhance their performance. For instance, airliners deploy hydraulically operated flaps and ailerons during take-off and landing, while Formula 1 racing cars utilize hydraulic tubes and actuators for various components, including the Drag Reduction System (DRS). The DRS, consisting of a rear wing and adjustable flaps, plays a crucial role in overtaking manoeuvres. The DRS has two positions: the default position with the flaps down, providing high downforce, and the lifted position, which reduces drag, allowing for increased speed and aiding in overtaking. Swift deployment of the DRS during races is essential for overtaking competitors. The fluid flow over the rear wing flap becomes intricate during deployment, involving flow reversal and operational changes, leading to unsteady flow physics that significantly influence the aerodynamic characteristics. Understanding the drag and downforce during DRS deployment is crucial for determining race outcomes. While experiments can yield accurate aerodynamic data, they can be expensive and challenging to conduct across varying speeds. Computational Fluid Dynamics (CFD) emerges as a cost-effective way to predict drag and downforce across a range of speeds, especially under rapid deployment of the DRS. This study employs the finite volume-based solver Ansys Fluent, incorporating dynamic mesh motion and a turbulence model to capture the complex flow phenomena associated with the moving rear wing flap. A dedicated section of the rear wing flap is considered in the present simulations; the aerodynamics of this section closely resembles that of S1223 aerofoils. Before the simulations of the rear wing-flap aerofoil, the numerical results were validated against experimental data from an NLR flap aerofoil case encompassing different flap angles at two distinct angles of attack. For a given angle of attack, lift and drag increase with flap angle. The simulation methodology for the rear wing-flap aerofoil case involves a specific time duration before lifting the flap. During this period, the drag and downforce are determined as 330 N and 1800 N, respectively. Following the flap lift, a noteworthy reduction in drag to 55% and in downforce to 17% of these pre-deployment values is observed. This understanding is critical for making instantaneous decisions regarding DRS deployment at specific speeds, thereby influencing the overall performance of the Formula 1 racing car. Hence, this work emphasizes the utilization of a dynamic mesh motion methodology to predict the aerodynamic characteristics during DRS deployment in a Formula 1 racing car.
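
A small arithmetic sketch in Python below applies the figures reported above, reading "to 55%" and "to 17%" as fractions of the pre-deployment wing-section forces; this reading, and the derived post-deployment numbers, are an illustration rather than results stated in the paper.

```python
# Arithmetic sketch of the reported DRS effect on the wing section, reading
# the abstract's "to 55%" / "to 17%" as fractions of the flap-down values.
# The post-deployment forces below are derived under that assumption.

drag_closed, down_closed = 330.0, 1800.0   # [N], flap down (from the abstract)
drag_open = 0.55 * drag_closed             # assumed reading: 55% of initial
down_open = 0.17 * down_closed             # assumed reading: 17% of initial

print(f"drag:      {drag_closed:6.1f} N -> {drag_open:6.1f} N")
print(f"downforce: {down_closed:6.1f} N -> {down_open:6.1f} N")
# Downforce-to-drag ratio before and after the flap lift:
print(f"down/drag: {down_closed / drag_closed:.2f} -> {down_open / drag_open:.2f}")
```

The sharp drop in the downforce-to-drag ratio is what makes the open-flap position useful only on straights, where downforce is expendable and drag dominates.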

Keywords: DRS, CFD, drag, downforce, dynamic mesh motion

Procedia PDF Downloads 79
448 Integrating Natural Language Processing (NLP) and Machine Learning in Lung Cancer Diagnosis

Authors: Mehrnaz Mostafavi

Abstract:

The assessment and categorization of incidental lung nodules present a considerable challenge in healthcare, often necessitating resource-intensive multiple computed tomography (CT) scans for growth confirmation. This research addresses this issue by introducing a distinct computational approach leveraging radiomics and deep-learning methods. However, understanding local services is essential before implementing these advancements. With diverse tracking methods in place, there is a need for efficient and accurate identification approaches, especially in the context of managing lung nodules alongside pre-existing cancer scenarios. This study explores the integration of text-based algorithms in medical data curation, indicating their efficacy in conjunction with machine learning and deep-learning models for identifying lung nodules. Combining medical images with text data has demonstrated superior data retrieval compared to using each modality independently. While deep learning and text analysis show potential in detecting previously missed nodules, challenges persist, such as increased false positives. The presented research introduces a Structured-Query-Language (SQL) algorithm designed for identifying pulmonary nodules in a tertiary cancer center, externally validated at another hospital. Leveraging natural language processing (NLP) and machine learning, the algorithm categorizes lung nodule reports based on sentence features, aiming to facilitate research and assess clinical pathways. The hypothesis posits that the algorithm can accurately identify lung nodule CT scans and predict concerning nodule features using machine-learning classifiers. Through a retrospective observational study spanning a decade, CT scan reports were collected, and an algorithm was developed to extract and classify data. Results underscore the complexity of lung nodule cohorts in cancer centers, emphasizing the importance of careful evaluation before assuming a metastatic origin. The SQL and NLP algorithms demonstrated high accuracy in identifying lung nodule sentences, indicating potential for local service evaluation and research dataset creation. Machine-learning models exhibited strong accuracy in predicting concerning changes in lung nodule scan reports. While limitations include variability in disease group attribution, the potential for correlation rather than causality in clinical findings, and the need for further external validation, the algorithm's accuracy and potential to support clinical decision-making and healthcare automation represent a significant stride in lung nodule management and research.
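
As a concrete illustration of the sentence-level classification of report text described above, here is a minimal sketch assuming Python with scikit-learn: TF-IDF features feed a logistic regression classifier. The report sentences and labels are invented, so this stands in for, rather than reproduces, the study's SQL/NLP pipeline.

```python
# Minimal sketch of classifying radiology report sentences by whether they
# mention a pulmonary nodule. Sentences and labels below are invented.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sentences = [
    "A 6 mm nodule is seen in the right upper lobe.",
    "No pulmonary nodules are identified.",
    "Stable 4 mm left lower lobe nodule, unchanged from prior.",
    "The lungs are clear without focal consolidation.",
    "New 9 mm spiculated nodule concerning for malignancy.",
    "No suspicious pulmonary lesions.",
]
labels = [1, 0, 1, 0, 1, 0]   # 1 = sentence reports a nodule

# TF-IDF over unigrams and bigrams, then logistic regression.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(sentences, labels)

print(clf.predict(["There is a new 5 mm nodule in the left upper lobe."]))
```

A production system of the kind the abstract describes would train on thousands of labeled sentences, handle negation explicitly, and report external-validation metrics; the sketch only shows the basic feature-and-classifier pattern.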

Keywords: lung cancer diagnosis, structured-query-language (SQL), natural language processing (NLP), machine learning, CT scans

Procedia PDF Downloads 65
447 Exo-III Assisted Amplification Strategy through Target Recycling for Hg²⁺ Detection in Water: A GNP-Based Label-Free Colorimetry Employing a T-Rich Hairpin-Loop Metallobase

Authors: Abdul Ghaffar Memon, Xiao Hong Zhou, Yunpeng Xing, Ruoyu Wang, Miao He

Abstract:

Due to the deleterious environmental and health effects of Hg²⁺ ions, researchers have developed various online detection methods beyond the traditional analytical tools. Biosensors, especially labeled, label-free, colorimetric, and optical sensors, have advanced towards sensitive detection. However, a gap remains in ultrasensitive quantification, as noise interferes significantly, especially in AuNP-based label-free colorimetry. This study reports an amplification strategy using the Exo-III enzyme for target recycling of Hg²⁺ ions in a T-rich hairpin-loop metallobase label-free colorimetric nanosensor with improved sensitivity, using unmodified gold nanoparticles (uGNPs) as the indicator. Two T-rich metallobase hairpin-loop structures, 5'- CTT TCA TAC ATA GAA AAT GTA TGT TTG -3' (HgS1) and 5'- GGC TTT GAG CGC TAA GAA A TA GCG CTC TTT G -3' (HgS2), were tested in the study. The thermodynamic properties of HgS1 and HgS2 were calculated using online tools (http://biophysics.idtdna.com/cgi-bin/meltCalculator.cgi). Lab-scale synthesized uGNPs were utilized in the analysis. The DNA sequences have T-rich bases on both tail ends, which in the presence of Hg²⁺ form T-Hg²⁺-T mismatches, promoting the formation of dsDNA. Subsequent Exo-III incubation enables the enzyme to cleave mononucleotides stepwise from the 3' end until the structure becomes single-stranded. These ssDNA fragments then adsorb onto the surface of the AuNPs and protect them from salt-induced aggregation. The visible change in color between blue (aggregated state in the absence of Hg²⁺) and pink (dispersed state in the presence of Hg²⁺, with adsorbed ssDNA fragments) can be observed and analyzed through UV spectrometry. An ultrasensitive quantitative nanosensor employing Exo-III assisted target recycling of mercury ions through label-free colorimetry with nanomolar detection using uGNPs has been achieved and is under further optimization to reach the picomolar range by avoiding the influence of the environmental matrix. The proposed strategy will contribute towards uGNP-based ultrasensitive, rapid, onsite, label-free colorimetric detection.

Keywords: colorimetric, Exo-III, gold nanoparticles, Hg²⁺ detection, label-free, signal amplification

Procedia PDF Downloads 296
446 Music Genre Classification Based on Non-Negative Matrix Factorization Features

Authors: Soyon Kim, Edward Kim

Abstract:

In order to retrieve information from the massive stream of songs in the music industry, music search by title, lyrics, artist, mood, and genre has become more important. Despite the subjectivity and controversy over the definition of music genres across different nations and cultures, automatic genre classification systems that facilitate the process of music categorization have been developed. Manual genre selection by music producers is provided as statistical data for designing automatic genre classification systems. In this paper, an automatic music genre classification system utilizing non-negative matrix factorization (NMF) is proposed. Short-term characteristics of the music signal can be captured with timbre features such as the mel-frequency cepstral coefficient (MFCC), decorrelated filter bank (DFB), octave-based spectral contrast (OSC), and octave band sum (OBS). Long-term time-varying characteristics of the music signal can be summarized with (1) statistical features such as the mean, variance, minimum, and maximum of the timbre features and (2) modulation spectrum features such as the spectral flatness measure, spectral crest measure, spectral peak, spectral valley, and spectral contrast of the timbre features. In addition to these conventional long-term feature vectors, NMF-based feature vectors are proposed for use in genre classification. In the training stage, NMF basis vectors were extracted for each genre class. The NMF features were calculated in the log spectral magnitude domain (NMF-LSM) as well as in the basic feature vector domain (NMF-BFV). For NMF-LSM, the entire full-band spectrum was used; for NMF-BFV, only the low-band spectrum was used, since the high-frequency modulation spectrum of the basic feature vectors did not contain important information for genre classification. In the test stage, using the set of pre-trained NMF basis vectors, the system extracted the NMF weighting values for each genre as the NMF feature vectors. A support vector machine (SVM) was used as the classifier. The GTZAN multi-genre music database, composed of 10 genres with 100 songs per genre, was used for training and testing. To increase the reliability of the experiments, 10-fold cross validation was used. For a given input song, an extracted NMF-LSM feature vector was composed of 10 weighting values corresponding to the classification probabilities for the 10 genres; an NMF-BFV feature vector likewise had a dimensionality of 10. Combined with the basic long-term features (statistical and modulation spectrum features), the NMF features provided increased accuracy with only a slight increase in feature dimensionality. The conventional basic features by themselves yielded 84.0% accuracy, while the basic features with NMF-LSM and with NMF-BFV provided 85.1% and 84.2% accuracy, respectively. The basic features required a dimensionality of 460, whereas NMF-LSM and NMF-BFV each required a dimensionality of only 10. Combining the basic features, NMF-LSM, and NMF-BFV with an SVM using a radial basis function (RBF) kernel produced a significantly higher classification accuracy of 88.3% with a feature dimensionality of 480.
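
The following minimal sketch, assuming Python with scikit-learn and SciPy, illustrates the NMF-feature idea: per-genre NMF bases are learned from non-negative features in training, and a song's activation weights against the pooled bases feed an RBF-kernel SVM. The random placeholder data and small dimensions are assumptions; this is not the paper's GTZAN pipeline or its exact feature extraction.

```python
import numpy as np
from scipy.optimize import nnls
from sklearn.decomposition import NMF
from sklearn.svm import SVC

# Sketch of per-genre NMF bases + activation weights as classifier features.
# Random non-negative vectors stand in for log-spectral-magnitude features.
rng = np.random.default_rng(0)
n_genres, songs_per_genre, n_bins = 3, 20, 128
X = rng.random((n_genres * songs_per_genre, n_bins))
y = np.repeat(np.arange(n_genres), songs_per_genre)

# Training stage: fit one small NMF per genre and pool the basis vectors.
bases = []
for g in range(n_genres):
    nmf = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
    nmf.fit(X[y == g])
    bases.append(nmf.components_)
H = np.vstack(bases)   # pooled bases, shape (n_genres * 4, n_bins)

# Feature extraction: non-negative least-squares weights of each song on H.
feats = np.array([nnls(H.T, x)[0] for x in X])

# Classification stage: RBF-kernel SVM on the NMF weight features.
clf = SVC(kernel="rbf").fit(feats, y)
print("train accuracy:", clf.score(feats, y))
```

In the paper's setting, each genre's weighting values act like soft class evidence, which is why the 10-genre task yields 10-dimensional NMF feature vectors.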

Keywords: mel-frequency cepstral coefficient (MFCC), music genre classification, non-negative matrix factorization (NMF), support vector machine (SVM)

Procedia PDF Downloads 279
445 Contribution of Word Decoding and Reading Fluency to Reading Comprehension in Young Typical Readers of the Kannada Language

Authors: Vangmayee V. Subban, Suzan Deelan. Pinto, Somashekara Haralakatta Shivananjappa, Shwetha Prabhu, Jayashree S. Bhat

Abstract:

Introduction and Need: During the early years of schooling, instruction mainly focuses on children's word decoding abilities. However, skilled readers must master all the components of reading: word decoding, reading fluency, and comprehension. Nevertheless, the relationship among these components during the process of learning to read is less clear. Studies conducted in alphabetic languages offer mixed conclusions on the relative contributions of word decoding and reading fluency to reading comprehension, and the scenario in alphasyllabary languages is unexplored. Aim and Objectives: The aim of the study was to explore the role of word decoding and reading fluency in the reading comprehension abilities of children learning to read Kannada, between the ages of 5.6 and 8.6 years. Method: In this cross-sectional study, a total of 60 typically developing children were selected from Kannada-medium schools: 20 each from Grade I, Grade II, and Grade III, with an equal gender ratio, in the age ranges of 5.6 to 6.6 years, 6.7 to 7.6 years, and 7.7 to 8.6 years, respectively. The reading fluency and reading comprehension abilities of the children were assessed using grade-level passages selected from the Kannada textbooks of the children's core curriculum. Each passage included five questions to assess reading comprehension. Pseudoword decoding skills were assessed using 40 pseudowords of varying syllable length and akshara composition. The pseudowords were formed by interchanging syllables within meaningful words while maintaining the phonotactic constraints of the Kannada language. The assessment material was subjected to content validation and reliability measures before data were collected from the study samples. The data were collected individually; reading fluency was assessed as words correctly read per minute, and pseudoword decoding was scored for reading accuracy. Results: Descriptive statistics indicated that mean pseudoword reading, reading comprehension, and words accurately read per minute increased with grade. Grade III children performed highest and Grade I lowest, with Grade II intermediate between them, indicating that reading skills gradually improve across grades. Pearson's correlation coefficients showed moderate, highly significant (p < 0.01) positive correlations between the variables, indicating the interdependency of all three components required for reading. Hierarchical regression analysis revealed that pseudoword decoding explained 37% of the variance in reading comprehension, a highly significant contribution. Upon subsequent entry of the reading fluency measure, the change in R-square was only 3% and not significant. Therefore, pseudoword decoding emerged as the single most significant predictor of reading comprehension during the early grades of reading acquisition. Conclusion: The present study concludes that pseudoword decoding skills contribute more significantly than reading fluency to reading comprehension during the initial years of schooling in children learning to read Kannada.
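
A minimal sketch of the hierarchical regression reported above follows, assuming Python with NumPy and statsmodels: decoding is entered first, fluency second, and the change in R-squared is inspected. The simulated data only roughly echo the reported pattern (decoding explaining most of the variance, fluency adding little) and are not the study's measurements.

```python
import numpy as np
import statsmodels.api as sm

# Hierarchical (stepwise-entry) regression sketch: step 1 enters pseudoword
# decoding, step 2 adds reading fluency; the R-squared change quantifies the
# unique contribution of fluency. All data below are simulated.
rng = np.random.default_rng(1)
n = 60
decoding = rng.normal(size=n)
fluency = 0.8 * decoding + 0.6 * rng.normal(size=n)   # correlated with decoding
comprehension = 0.6 * decoding + 0.05 * fluency + 0.75 * rng.normal(size=n)

# Step 1: decoding only.
X1 = sm.add_constant(np.column_stack([decoding]))
m1 = sm.OLS(comprehension, X1).fit()

# Step 2: decoding + fluency.
X2 = sm.add_constant(np.column_stack([decoding, fluency]))
m2 = sm.OLS(comprehension, X2).fit()

print(f"R2 step 1 (decoding):         {m1.rsquared:.3f}")
print(f"R2 step 2 (+ fluency):        {m2.rsquared:.3f}")
print(f"Delta R2 from adding fluency: {m2.rsquared - m1.rsquared:.3f}")
```

A small Delta R2, as in the study, indicates that fluency carries little predictive information about comprehension beyond what decoding already provides.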

Keywords: alphasyllabary, pseudo-word decoding, reading comprehension, reading fluency

Procedia PDF Downloads 245
444 Caffeic Acid in Cosmetic Formulations: An Innovative Assessment

Authors: Caroline M. Spagnol, Vera L. B. Isaac, Marcos A. Corrêa, Hérida R. N. Salgado

Abstract:

Phenolic compounds are abundant in the Brazilian plant kingdom, forming part of a large and complex group of organic substances. Cinnamic acids belong to this group, and caffeic acid (CA) is one of their representatives. Antioxidants are compounds that act as free radical scavengers and, in some cases, as metal chelators, in both the initiation and propagation stages of the oxidative process. Tyrosinase, a polyphenol oxidase, is an enzyme that acts at various stages of melanin biosynthesis within melanocytes and is considered a key molecule in this process. Some phenolic compounds exhibit inhibitory effects on melanogenesis by inhibiting tyrosinase enzymatic activity and have therefore been the subject of studies. However, few studies have reported the effectiveness of these products and their safety. Objectives: To assess the tyrosinase inhibitory activity, antioxidant activity, and cytotoxic potential of CA. The tyrosinase inhibition assay evaluates the reduction in the enzyme-catalyzed transformation of L-dopa into dopaquinone. The antioxidant activity was evaluated using the DPPH radical inhibition method. The cytotoxicity evaluation was carried out using the MTT assay (3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-2H-tetrazolium bromide), a colorimetric assay that determines the amount of insoluble violet crystals formed by the reduction of MTT in the mitochondria of living cells. Based on the results obtained during the study, CA has low activity as a depigmenting agent. However, it is a more potent antioxidant than ascorbic acid (AA), since a lower amount of CA is sufficient to inhibit 50% of the DPPH radical. The results are promising, since the CA concentration that promoted 50% toxicity in HepG2 cells (IC50 = 781.8 μg/mL) is approximately 330 to 400 times greater than the concentration required to scavenge 50% of the DPPH (IC50 DPPH = 2.39 μg/mL) and ABTS (IC50 ABTS = 1.96 μg/mL) radicals, respectively. The maximum concentration of caffeic acid tested (1140 μg/mL) did not reach 50% cell death in HaCat cells. Thus, it was concluded that caffeic acid does not cause toxicity in HepG2 and HaCat cells at the concentrations required to promote antioxidant activity in vitro, and it can be applied in topical products.
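
To make the IC50 figures above concrete, here is a minimal sketch, assuming Python with NumPy and SciPy, of estimating an IC50 by fitting a four-parameter logistic curve to percent-inhibition data; the concentration-response points are invented for illustration and are not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit a four-parameter logistic (4PL) dose-response curve to % inhibition
# data and read off the half-maximal concentration. Data below are invented.

def four_pl(c, bottom, top, ic50, hill):
    """4PL curve: response rises from `bottom` to `top` around `ic50`."""
    return bottom + (top - bottom) / (1.0 + (ic50 / c) ** hill)

conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])        # µg/mL (invented)
inhib = np.array([12.0, 28.0, 46.0, 68.0, 84.0, 93.0])  # % inhibition (invented)

popt, _ = curve_fit(four_pl, conc, inhib, p0=[0.0, 100.0, 2.0, 1.0])
print(f"estimated IC50 ~ {popt[2]:.2f} µg/mL")
```

The selectivity margin cited in the abstract is simply the ratio of the cytotoxicity IC50 to the antioxidant IC50 (781.8 / 2.39 ≈ 327 for DPPH and 781.8 / 1.96 ≈ 399 for ABTS), which is why a wide gap between the two supports topical use.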

Keywords: caffeic acid, antioxidant, cytotoxicity, cosmetic

Procedia PDF Downloads 363
443 Development of a Novel Ankle-Foot Orthotic Using a User Centered Approach for Improved Satisfaction

Authors: Ahlad Neti, Elisa Arch, Martha Hall

Abstract:

Studies have shown that individuals who use ankle-foot orthoses (AFOs) report a high level of dissatisfaction with their current AFOs. Studies point to a focus on technical design, with little attention given to the user perspective, as a source of AFO designs that leave users dissatisfied. To design a new AFO that satisfies users and thereby improves their quality of life, the reasons for their dissatisfaction and their wants and needs for an improved AFO design must be identified. There has been little research into the user perspective on AFO use and desired improvements, so the relationship between AFO design and satisfaction in daily use must be assessed to develop appropriate metrics and constraints prior to designing a novel AFO. To assess the user perspective on AFO design, structured interviews were conducted with 7 individuals (average age of 64.29 ± 8.81 years) who use AFOs. All interviews were transcribed and coded to identify common themes using the Grounded Theory Method in NVivo 12. Qualitative analysis of these results identified sources of user dissatisfaction, such as heaviness, bulk, and uncomfortable materials, along with overall needs and wants for an AFO. Beyond the user perspective, certain objective factors must be considered in the construction of metrics and constraints to ensure that the AFO fulfills its medical purpose. These more objective metrics are rooted in common medical device market requirements and technical standards. Given the large body of research concerning these standards, the objective metrics and constraints were derived through a literature review. Through these two methods, a comprehensive list of metrics and constraints was compiled, accounting for both the user perspective on AFO design and the AFO's medical purpose. These metrics and constraints establish the framework for designing a new AFO that carries out its medical purpose while also improving the user experience. The metrics can be categorized into several overarching areas for AFO improvement. Categories of user-perspective metrics include comfort, discreetness, aesthetics, ease of use, and compatibility with clothing. Categories of medical-purpose metrics include biomechanical functionality, durability, and affordability. These metrics guided an iterative prototyping process. Six concepts were ideated and compared using system-level analysis, from which two, the piano wire model and the segmented model, were selected to move forward into prototyping. Evaluation of non-functional prototypes of the piano wire and segmented models determined that the piano wire model better fulfilled the metrics by offering increased stability, longer durability, fewer points of failure, and a core component strong enough to allow a sock to cover the AFO while maintaining the overall structure. As such, the piano wire AFO has moved forward into the functional prototyping phase, and healthy-subject testing is being designed, with participants being recruited, to conduct design validation and verification.

Keywords: ankle-foot orthotic, assistive technology, human centered design, medical devices

Procedia PDF Downloads 136