Search results for: relational processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3887

2957 Clinical Validation of an Automated Natural Language Processing Algorithm for Finding COVID-19 Symptoms and Complications in Patient Notes

Authors: Karolina Wieczorek, Sophie Wiliams

Abstract:

Introduction: Patient data are often collected in Electronic Health Record (EHR) systems for purposes such as providing care and reporting data. This information can be re-used to validate data models in clinical trials or in epidemiological studies. Manual validation of automated tools is vital to pick up errors in processing and to provide confidence in the output. Mentioning a disease in a discharge letter does not necessarily mean that a patient suffers from it: many notes discuss a diagnostic process or different tests, or discuss whether a patient has a certain disease. The COVID-19 dataset in this study was built with an automated natural language processing (NLP) algorithm that extracts information related to COVID-19 symptoms, complications, and medications prescribed within the hospital. Free-text clinical patient notes are rich sources of information containing patient data not captured in structured form, hence the use of named entity recognition (NER) to capture additional information. Methods: Patient data (discharge summary letters) were exported and screened by the algorithm to pick up relevant terms related to COVID-19. A list of 124 Systematized Nomenclature of Medicine (SNOMED) Clinical Terms was provided in Excel with corresponding IDs. Two independent medical student researchers were given this SNOMED term dictionary to refer to when screening the notes. They worked on two separate datasets, "A" and "B", respectively. Notes were screened to check that the correct term had been picked up by the algorithm and that negated terms had not. Results: The algorithm's implementation in the hospital began on March 31, 2020, and the first EHR-derived extract was generated for use in an audit study on June 04, 2020.
The dataset has contributed to large, priority clinical trials (including the International Severe Acute Respiratory and Emerging Infection Consortium (ISARIC), by bulk upload to REDCap research databases) and to local research and audit studies. Successful sharing of EHR-extracted datasets requires communicating their provenance and quality, including completeness and accuracy. The validation of the algorithm yielded a precision of 0.907, a recall of 0.416, and an F-score of 0.570. Percentage enhancement with NLP-extracted terms compared to regular data extraction alone was low (0.3%) for relatively well-documented data such as previous medical history, but higher for complications (16.6%), presenting illness (29.53%), chronic procedures (30.3%), and acute procedures (45.1%). Conclusions: This automated NLP algorithm is shown to be useful in facilitating patient data analysis and has the potential to be used in larger-scale clinical trials, for example to assess potential study exclusion criteria for participants in the development of vaccines.
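The reported precision, recall, and F-score follow the standard definitions over true positives, false positives, and false negatives; a minimal sketch (the count-based helper is illustrative, while the final check uses the study's reported figures):

```python
def precision_recall_f(tp, fp, fn):
    """Standard validation metrics for term extraction, from raw counts."""
    precision = tp / (tp + fp)   # correct terms among all extracted terms
    recall = tp / (tp + fn)      # correct terms among all terms present in the notes
    f_score = 2 * precision * recall / (precision + recall)
    return precision, recall, f_score

def f_from_pr(precision, recall):
    """F-score (harmonic mean) directly from precision and recall."""
    return 2 * precision * recall / (precision + recall)

# The study's reported precision (0.907) and recall (0.416) are
# consistent with its reported F-score (0.570):
f = f_from_pr(0.907, 0.416)
```

The low recall with high precision is typical of conservative extraction rules: few false positives, but many symptom mentions missed.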

Keywords: automated, algorithm, NLP, COVID-19

Procedia PDF Downloads 92
2956 Combined Synchrotron Radiography and Diffraction for in Situ Study of Reactive Infiltration of Aluminum into Iron Porous Preform

Authors: S. Djaziri, F. Sket, A. Hynowska, S. Milenkovic

Abstract:

The use of Fe-Al based intermetallics as an alternative to Cr/Ni based stainless steels is very promising for industrial applications that use parts made from critical raw materials under extreme conditions. However, developing advanced Fe-Al based intermetallics with appropriate mechanical properties presents several challenges involving appropriate processing and microstructure control. A processing strategy is being developed which aims at producing a net-shape porous Fe-based preform that is infiltrated with molten Al or an Al alloy. In the present work, porous Fe-based preforms produced by two different methods (selective laser melting (SLM) and the Kochanek process (KE)) are studied during infiltration with molten aluminum. To elucidate the mechanisms underlying the formation of Fe-Al intermetallic phases during infiltration, an in-house furnace has been designed for in situ observation of infiltration at synchrotron facilities, combining x-ray radiography (XR) and x-ray diffraction (XRD) techniques. The feasibility of this approach has been demonstrated, and information about the melt flow front propagation has been obtained. In addition, reactive infiltration has been achieved, and a bi-phased intermetallic layer was identified forming between the solid Fe and liquid Al. In particular, a tongue-like Fe₂Al₅ phase adhering to the Fe and a needle-like Fe₄Al₁₃ phase adhering to the Al were observed. The growth of the intermetallic compound was found to depend on the temperature gradient along the preform as well as on the reaction time, which is discussed in view of the different results obtained.

Keywords: combined synchrotron radiography and diffraction, Fe-Al intermetallic compounds, in-situ molten Al infiltration, porous solid Fe preforms

Procedia PDF Downloads 218
2955 Reverse Logistics Network Optimization for E-Commerce

Authors: Albert W. K. Tan

Abstract:

This research consolidates a comprehensive array of publications from peer-reviewed journals, case studies, and seminar reports focused on reverse logistics and network design. By synthesizing this secondary knowledge, our objective is to identify and articulate the key decision factors crucial to reverse logistics network design for e-commerce. Through this exploration, we aim to present a refined mathematical model that offers valuable insights for companies seeking to optimize their reverse logistics operations. The primary goal of this research is to develop a comprehensive framework for advising organizations and companies on crafting effective networks for their reverse logistics operations, thereby facilitating the achievement of their organizational goals. This involves a thorough examination of various network configurations, weighing their advantages and disadvantages to ensure alignment with specific business objectives. The key objectives of this research are: (i) identifying pivotal factors pertinent to network design decisions within the realm of reverse logistics across diverse supply chains; (ii) formulating a structured framework designed to offer informed recommendations for sound network design decisions applicable to relevant industries and scenarios; and (iii) proposing a mathematical model to optimize the reverse logistics network. A conceptual framework for designing a reverse logistics network has been developed through a combination of insights from the literature review and information gathered from company websites. This framework encompasses four key stages in the selection of reverse logistics operating modes: (1) collection, (2) sorting and testing, (3) processing, and (4) storage. Key factors to consider in reverse logistics network design are: I) Centralized vs. decentralized processing: Centralized processing, a long-standing practice in reverse logistics, has recently gained greater attention from manufacturing companies.
In this system, all products within the reverse logistics pipeline are brought to a central facility for sorting, processing, and subsequent shipment to their next destinations. Centralization offers the advantage of efficiently managing the reverse logistics flow, potentially leading to increased revenue from returned items. Moreover, it aids in determining the most appropriate reverse channel for handling returns. By contrast, a decentralized system is more suitable when products are returned directly from consumers to retailers; in this scenario, individual sales outlets serve as gatekeepers for processing returns. Considerations encompass the product lifecycle, product value and cost, return volume, and the geographic distribution of returns. II) In-house vs. third-party logistics providers: The decision between insourcing and outsourcing in reverse logistics network design is pivotal. In insourcing, a company handles the entire reverse logistics process, including material reuse. In contrast, outsourcing involves third-party providers taking on various aspects of reverse logistics. Companies may choose outsourcing due to resource constraints or lack of expertise, with the extent of outsourcing varying based on factors such as personnel skills and cost considerations. Based on the conceptual framework, the authors have constructed a mathematical model that optimizes reverse logistics network design decisions. The model considers key factors identified in the framework, such as transportation costs, facility capacities, and lead times. The authors employed mixed-integer linear programming to find optimal solutions that minimize costs while meeting organizational objectives.
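The centralized-vs-decentralized decision can be viewed as a small facility-selection problem. A toy sketch of that trade-off (all facility names, costs, and volumes are invented for illustration; a real formulation would hand the same data to a MILP solver rather than brute-force enumeration):

```python
from itertools import product

# Hypothetical facilities: name -> (fixed opening cost, capacity, unit processing cost)
facilities = {
    "central_hub": (5000, 1000, 2.0),
    "retail_east": (1200, 300, 3.5),
    "retail_west": (1200, 300, 3.5),
}
demand = 800  # returned units that must be processed

def network_cost(open_set):
    """Total cost of an open-facility configuration, or None if infeasible."""
    if sum(facilities[f][1] for f in open_set) < demand:
        return None
    cost = sum(facilities[f][0] for f in open_set)  # fixed opening costs
    remaining = demand
    # route volume to the cheapest open facilities first
    for f in sorted(open_set, key=lambda f: facilities[f][2]):
        used = min(remaining, facilities[f][1])
        cost += used * facilities[f][2]
        remaining -= used
    return cost

# enumerate every open/closed combination and keep the cheapest feasible one
names = list(facilities)
best_cost, best_cfg = float("inf"), None
for mask in product([False, True], repeat=len(names)):
    open_set = [n for n, keep in zip(names, mask) if keep]
    c = network_cost(open_set)
    if c is not None and c < best_cost:
        best_cost, best_cfg = c, open_set
```

With these invented numbers the single central hub wins, mirroring the abstract's point that centralization can be cost-efficient at high return volumes, while the retail outlets alone lack the capacity to be feasible.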

Keywords: reverse logistics, supply chain management, optimization, e-commerce

Procedia PDF Downloads 28
2954 Redesigning the Plant Distribution of an Industrial Laundry in Arequipa

Authors: Ana Belon Hercilla

Abstract:

The study was developed at the “Reactivos Jeans” company in the city of Arequipa, whose main business is industrial-scale garment laundering. In 2012 the company initiated actions to provide a dry-cleaning service for alpaca fiber garments, recognizing that this item is in a growth phase in Peru. Additionally, the company took the initiative to use a new greenwashing technology which had not yet been developed in the country. To accomplish this, a redesign of both the process and the plant layout was required. For redesigning the plant, the methodology used was Systematic Layout Planning, which allowed the study to be divided into four stages. The first stage is information gathering and evaluation of the initial situation of the company, for which a description was made of the areas, facilities, initial equipment, plant distribution, production process, and flows of the major operations. The second stage is the application of engineering techniques that allow procedures to be logged and analyzed, such as the flow diagram, route diagram, DOP (process flowchart), and DAP (analysis diagram). Then the planning of the general distribution is carried out. At this stage, proximity factors of the areas are established, and the activity relationship table (TRA) and activity relationship diagram (DRA) are developed. To obtain the general grouping diagram (DGC), this information is complemented by a time study, and the Guerchet method is used to calculate the space requirements for each area. Finally, the plant layout redesign is presented and the improvement is implemented, making it possible to obtain a model much more efficient than the initial design.
The results indicate that the implementation of the new machinery, the adaptation of the plant facilities, and the relocation of equipment reduced the production cycle time by 75.67%, shortened routes by 68.88%, cut the number of activities during the process by 40%, and eliminated waits and storage entirely (100%).

Keywords: redesign, time optimization, industrial laundry, greenwashing

Procedia PDF Downloads 385
2953 Context Detection in Spreadsheets Based on Automatically Inferred Table Schema

Authors: Alexander Wachtel, Michael T. Franzen, Walter F. Tichy

Abstract:

Programming requires years of training. With natural language and end-user development methods, programming could become available to everyone: end users could program their own devices and extend the functionality of the existing system without any knowledge of programming languages. In this paper, we describe an Interactive Spreadsheet Processing Module (ISPM), a natural language interface to spreadsheets that allows users to address ranges within the spreadsheet based on an inferred table schema. Using the ISPM, end users are able to search for values in the schema of the table and to address the data in spreadsheets implicitly. Furthermore, it enables them to select and sort the spreadsheet data using natural language. ISPM uses a machine learning technique to automatically infer areas within a spreadsheet, including different kinds of headers and data ranges. Since ranges can be identified from natural language queries, end users can query the data using natural language. During the evaluation, 12 undergraduate students were asked to perform operations (sum, sort, group, and select) using the system and also using Excel without the ISPM interface, and the time taken for task completion was compared across the two systems. Only for the selection task did users take less time in Excel than in ISPM, since they directly selected the cells using the mouse. By applying natural language to end-user software engineering, ISPM aims to overcome the present bottleneck of professional developers.
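The paper's schema inference is learned from data; as a simplified, hand-written sketch of the same idea (the heuristic and the sample sheet are illustrative only, not the ISPM algorithm), a header row can be guessed as the first all-text row sitting directly above a row that contains numbers, after which a query can resolve a column by its header label:

```python
def is_number(cell):
    try:
        float(cell)
        return True
    except (TypeError, ValueError):
        return False

def infer_header_row(rows):
    """Return the index of the first all-text row followed by a row with numbers."""
    for i in range(len(rows) - 1):
        if not any(is_number(c) for c in rows[i]) and any(is_number(c) for c in rows[i + 1]):
            return i
    return None

def column_sum(rows, header_name):
    """Resolve a column by its header label and sum its numeric cells."""
    h = infer_header_row(rows)
    col = rows[h].index(header_name)
    return sum(float(r[col]) for r in rows[h + 1:] if is_number(r[col]))

sheet = [
    ["Quarterly Sales", "", ""],   # title row: all text, but so is the next row
    ["Region", "Q1", "Q2"],        # header row: all text, with numbers below it
    ["North", "120", "135"],
    ["South", "90", "110"],
]
```

A query like "sum Q1" then reduces to `column_sum(sheet, "Q1")`, which is the kind of implicit range addressing the abstract describes.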

Keywords: natural language processing, natural language interfaces, human computer interaction, end user development, dialog systems, data recognition, spreadsheet

Procedia PDF Downloads 296
2952 Occupational Stress and Lipid Profile among Drivers in Ismailia City, Egypt

Authors: Amani Waheed, Adel Mishriky, Rasha Farouk, Essam Abdallah, Sarah Hussein

Abstract:

Background: Occupational stress plays a crucial role in professional drivers' health. They are exposed to high workloads, low physical activity, high demands and low decision latitude, as well as poor lifestyle factors including poor diet, sedentary work, and smoking. Dyslipidemia is a well-established modifiable cardiovascular risk factor. Occupational stress and other forms of chronic stress have been associated with raised levels of atherogenic lipids. Although stress management shows some evidence of improving the lipid profile, the association between occupational stress and dyslipidemia is not clear. Objectives: To assess the relationship between occupational stress and lipid profile among professional drivers. Methodology: A cross-sectional study was conducted at a large company in Ismailia City, Egypt, where 131 professional drivers (44 car drivers, 43 bus drivers, and 44 truck drivers) were eligible after applying the exclusion criteria. The occupational stress index (OSI) and non-occupational risk factors for dyslipidemia were assessed using a structured interview questionnaire. Blood pressure, body mass index (BMI), and lipid profile were measured. Results: The mean total OSI score was 79.98 ± 6.14. The total OSI score was highest among truck drivers (82.16 ± 4.62), followed by bus drivers (80.26 ± 6.02), and lowest among car drivers (77.55 ± 6.79), with statistically significant differences. Eighty percent of the drivers had dyslipidemia. The duration of driving hours per day, exposure to passive smoking, and increased BMI were the risk factors. There was no statistically significant association between total OSI score and dyslipidemia. Using logistic regression analysis, occupational stress, duration of driving hours per day, and BMI were significant positive predictors of dyslipidemia. Conclusion: Professional drivers are exposed to occupational stress, and a high proportion of them have dyslipidemia; however, the total OSI score shows no statistically significant relation with dyslipidemia.

Keywords: body mass index, dyslipidaemia, occupational stress, professional drivers

Procedia PDF Downloads 152
2951 Studying the Effect of Reducing Thermal Processing over the Bioactive Composition of Non-Centrifugal Cane Sugar: Towards Natural Products with High Therapeutic Value

Authors: Laura Rueda-Gensini, Jader Rodríguez, Juan C. Cruz, Carolina Munoz-Camargo

Abstract:

There is an emerging interest in botanicals and plant extracts for medicinal practices due to their widely reported health benefits. A large variety of phytochemicals found in plants have been correlated with antioxidant, immunomodulatory, and analgesic properties, which makes plant-derived products promising candidates for modulating the progression and treatment of numerous diseases. Non-centrifugal cane sugar (NCS), in particular, has been known for its high antioxidant and nutritional value, but compositional variability due to changing environmental and processing conditions has considerably limited its use in the nutraceutical and biomedical fields. This work therefore aims at assessing the effect of thermal exposure during NCS production on its bioactive composition and, in turn, its therapeutic value. Accordingly, two modified dehydration methods are proposed: (i) vacuum-aided evaporation, which reduces the temperatures needed to dehydrate the sample, and (ii) refractance window evaporation, which reduces thermal exposure time. The biochemical composition of NCS produced under these two methods was compared to traditionally produced NCS by estimating total polyphenolic and protein content with Folin-Ciocalteu and Bradford assays, and by identifying the major phenolic compounds in each sample via HPLC-coupled mass spectrometry. Antioxidant activities were also compared, as measured by the scavenging of ABTS and DPPH radicals. Results show that the two modified production methods enhance the polyphenolic and protein yield of the resulting NCS samples compared to the traditional production method. In particular, reducing the temperatures employed via vacuum-aided evaporation proved superior at preserving polyphenolic compounds, as evidenced in both the total and the individual polyphenol concentrations. However, antioxidant activities did not differ significantly between the methods.
Although additional studies should be performed to determine if the observed compositional differences affect other therapeutic activities (e.g., anti-inflammatory, analgesic, and immunoprotective), these results suggest that reducing thermal exposure holds great promise for the production of natural products with enhanced nutritional value.

Keywords: non-centrifugal cane sugar, polyphenolic compounds, thermal processing, antioxidant activity

Procedia PDF Downloads 82
2950 Enhancing Temporal Extrapolation of Wind Speed Using a Hybrid Technique: A Case Study in West Coast of Denmark

Authors: B. Elshafei, X. Mao

Abstract:

The demand for renewable energy is increasing significantly, and major investments are being made in wind power generation as a leading source of clean energy. The wind energy sector depends entirely on the prediction of wind speed, which is by nature highly stochastic. This study employs deep multi-fidelity Gaussian process regression (GPR) to predict wind speeds over medium-term time horizons. Data from the RUNE experiment on the west coast of Denmark were provided by the Technical University of Denmark; they represent the wind speed across the study area for the period between December 2015 and March 2016. The study investigates the effect of pre-processing the data by denoising the signal using the empirical wavelet transform (EWT) and of including the vector components of wind speed to increase the number of input data layers for data fusion with deep multi-fidelity GPR. The outcomes were compared using root mean square error (RMSE), and the results demonstrated a significant increase in prediction accuracy: using the vector components of the wind speed as additional predictors yields more accurate predictions than strategies that ignore them, reflecting the importance of including all sub-data and of pre-processing signals in wind speed forecasting models.
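The RMSE comparison underlying the study's conclusion can be sketched as follows (the observed and predicted wind speeds are invented for illustration; only the metric itself is standard):

```python
import math

def rmse(pred, obs):
    """Root mean square error between predictions and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

# Invented wind speeds (m/s): observations and two forecast variants,
# one from scalar speed alone, one also using the u/v vector components.
observed     = [8.1, 7.4, 9.0, 10.2, 8.8]
scalar_model = [7.0, 8.0, 8.1, 11.5, 9.9]
vector_model = [7.9, 7.6, 8.7, 10.5, 9.0]

scalar_rmse = rmse(scalar_model, observed)
vector_rmse = rmse(vector_model, observed)
```

A lower RMSE for the vector-augmented forecast is the form of evidence the abstract reports in favour of including the wind components as extra input layers.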

Keywords: data fusion, Gaussian process regression, signal denoise, temporal extrapolation

Procedia PDF Downloads 130
2949 The Relation between Cognitive Fluency and Utterance Fluency in Second Language Spoken Fluency: Studying Fluency through a Psycholinguistic Lens

Authors: Tannistha Dasgupta

Abstract:

This study explores the aspects of second language (L2) spoken fluency that are related to L2 linguistic knowledge and processing skill. It draws on Levelt’s ‘blueprint’ of the L2 speaker, which discusses the cognitive issues underlying the act of speaking. However, L2 speaking assessments have largely neglected the underlying mechanisms involved in language production; emphasis is given to the relationship between subjective ratings of L2 speech samples and objectively measured aspects of fluency. Hence, this study examines the relation between L2 linguistic knowledge and processing skill, i.e., cognitive fluency (CF), and objectively measurable aspects of L2 spoken fluency, i.e., utterance fluency (UF). The participants of the study are L2 learners of English studying at high school level in Hyderabad, India. 50 participants with an intermediate level of proficiency in English performed several lexical retrieval and attention-shifting tasks to measure CF, and 8 oral tasks to measure UF. Each aspect of UF (speed, pause, and repair) was measured against the scores of CF to find out which aspects of UF are reliable indicators of CF. Quantitative analysis of the data shows that, among the three aspects of UF, speed is the best predictor of CF, while pause is only weakly related to CF. The study suggests that including the speed aspect of UF could make L2 fluency assessment more reliable, valid, and objective. Thus, incorporating the assessment of psycholinguistic mechanisms into L2 spoken fluency testing could result in fairer evaluation.
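The speed and pause aspects of utterance fluency are commonly operationalized as rates computed over timed speech samples; a small sketch using standard definitions from fluency research (the sample timings are illustrative, not the study's data):

```python
def utterance_fluency(syllables, speaking_time, pauses):
    """Speed and pause measures commonly used in utterance-fluency research:
    speech rate (syllables over total time, pauses included),
    articulation rate (syllables over speaking time only),
    and mean pause duration."""
    total_time = speaking_time + sum(pauses)
    return {
        "speech_rate": syllables / total_time,
        "articulation_rate": syllables / speaking_time,
        "mean_pause": sum(pauses) / len(pauses) if pauses else 0.0,
    }

# Invented sample: 45 syllables, 15 s of actual speech, three silent pauses
measures = utterance_fluency(45, 15.0, [0.5, 1.0, 1.5])
```

Measures like these are what "speed" and "pause" refer to when the abstract reports that speed best predicts cognitive fluency.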

Keywords: attention-shifting, cognitive fluency, lexical retrieval, utterance fluency

Procedia PDF Downloads 702
2948 Digitalisation of the Railway Industry: Recent Advances in the Field of Dialogue Systems: Systematic Review

Authors: Andrei Nosov

Abstract:

This paper discusses the development directions of dialogue systems within the digitalisation of the railway industry, where technologies based on conversational AI are already being applied or soon will be. Conversational AI is one of the most popular natural language processing (NLP) tasks, as it has great prospects for real-world applications today. At the same time, it is a challenging task, as it involves many areas of NLP that rest on complex computations and deep insights from linguistics and psychology. In this review, we focus on dialogue systems and their implementation in the railway domain. We comprehensively review the state-of-the-art research results on dialogue systems and analyse them from three perspectives: the type of problem to be solved, the type of model, and the type of system. In particular, from the perspective of the type of tasks to be solved, we discuss characteristics and applications, which helps in understanding how to prioritise tasks. In terms of the type of models, we give an overview that will allow researchers to become familiar with how to apply them in dialogue systems. By analysing the types of dialogue systems, we propose an unconventional approach, in contrast to colleagues who traditionally contrast goal-oriented dialogue systems with open-domain systems: our view focuses on considering retrieval-based and generative approaches. Furthermore, the work comprehensively presents evaluation methods and datasets for dialogue systems in the railway domain to pave the way for future research. Finally, some possible directions for future research are identified based on recent research results.

Keywords: digitalisation, railway, dialogue systems, conversational AI, natural language processing, natural language understanding, natural language generation

Procedia PDF Downloads 57
2947 Predictive Analysis of Chest X-rays Using NLP and Large Language Models with the Indiana University Dataset and Random Forest Classifier

Authors: Azita Ramezani, Ghazal Mashhadiagha, Bahareh Sanabakhsh

Abstract:

This study researches the combination of Random Forest classifiers with large language models (LLMs) and natural language processing (NLP) to improve diagnostic accuracy in chest X-ray analysis using the Indiana University dataset. Utilizing advanced NLP techniques, the research preprocesses textual data from radiological reports to extract key features, which are then merged with image-derived data. This enriched dataset is analyzed with Random Forest classifiers to predict specific clinical results, focusing on the identification of health issues and the estimation of case urgency. The findings reveal that the combination of NLP, LLMs, and machine learning increases not only diagnostic precision but also reliability, especially in quickly identifying critical conditions. Achieving an accuracy of 99.35%, the model shows significant advancement over conventional diagnostic techniques. The results emphasize the large potential of machine learning in medical imaging, suggesting that these technologies could greatly enhance clinician judgment and patient outcomes by offering quicker and more precise diagnostic approximations.
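The abstract does not detail the exact feature pipeline; one common way to turn report text into numeric features for a Random Forest is TF-IDF weighting, sketched here in plain Python (the sample report snippets are invented, not drawn from the Indiana University dataset):

```python
import math
from collections import Counter

def tfidf(docs):
    """Minimal TF-IDF vectors over whitespace-tokenized documents."""
    tokenized = [doc.lower().split() for doc in docs]
    # document frequency: in how many documents each token appears
    doc_freq = Counter(t for doc in tokenized for t in set(doc))
    n = len(docs)
    vectors = []
    for doc in tokenized:
        tf = Counter(doc)
        vectors.append(
            {t: (tf[t] / len(doc)) * math.log(n / doc_freq[t]) for t in tf}
        )
    return vectors

# Invented radiology-report snippets for illustration
reports = [
    "no acute cardiopulmonary abnormality",
    "mild cardiomegaly no acute abnormality",
    "right lower lobe opacity concerning for pneumonia",
]
vectors = tfidf(reports)
# terms unique to one report (e.g. "pneumonia") outweigh terms shared
# across reports (e.g. "no"), making them more discriminative features
```

Sparse vectors of this kind, concatenated with image-derived features, are the sort of merged input a Random Forest classifier can consume.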

Keywords: natural language processing (NLP), large language models (LLMs), random forest classifier, chest x-ray analysis, medical imaging, diagnostic accuracy, indiana university dataset, machine learning in healthcare, predictive modeling, clinical decision support systems

Procedia PDF Downloads 32
2946 Sentiment Analysis of Chinese Microblog Comments: Comparison between Support Vector Machine and Long Short-Term Memory

Authors: Xu Jiaqiao

Abstract:

Text sentiment analysis is an important branch of natural language processing. The technology is widely used in public opinion analysis and web-surfing recommendations. At present, mainstream sentiment analysis methods fall into three groups: methods based on a sentiment dictionary, on traditional machine learning, and on deep learning. This paper analyzes and compares the advantages and disadvantages of the SVM method of traditional machine learning and the long short-term memory (LSTM) method of deep learning in the field of Chinese sentiment analysis, using Chinese comments on Sina Microblog as the dataset. Firstly, this paper classifies and adds labels to the original comment dataset obtained by a web crawler, and then uses Jieba word segmentation to tokenize the original dataset and remove stop words. After that, the paper extracts text feature vectors and builds document word vectors to facilitate the training of the models. Finally, SVM and LSTM models are trained respectively. The accuracy of the LSTM model is 85.80%, while the accuracy of the SVM is 91.07%; at the same time, the LSTM needs only 2.57 seconds to run, while the SVM model needs 6.06 seconds. This paper therefore concludes that, compared with the SVM model, the LSTM model is worse in accuracy but faster in processing speed.
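The paper's accuracy comparison reduces to simple agreement counting, and its runtime comparison to wall-clock measurement; a sketch with invented predictions (not the paper's model outputs):

```python
import time

def accuracy(pred, gold):
    """Fraction of predictions that match the gold labels."""
    return sum(p == g for p, g in zip(pred, gold)) / len(gold)

def timed(fn, *args):
    """Return (result, elapsed seconds) for one call, as in a runtime comparison."""
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

gold      = [1, 0, 1, 1, 0, 1, 0, 0]   # illustrative sentiment labels (1 = positive)
svm_pred  = [1, 0, 1, 1, 0, 1, 0, 1]   # illustrative SVM outputs
lstm_pred = [1, 0, 0, 1, 0, 1, 1, 1]   # illustrative LSTM outputs

svm_acc, svm_seconds = timed(accuracy, svm_pred, gold)
lstm_acc, lstm_seconds = timed(accuracy, lstm_pred, gold)
```

With real models, `timed` would wrap the full inference call, yielding accuracy/speed pairs like the 91.07%/6.06 s and 85.80%/2.57 s the paper reports.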

Keywords: sentiment analysis, support vector machine, long short-term memory, Chinese microblog comments

Procedia PDF Downloads 81
2945 Mobile Communication Technologies, Romantic Attachment and Relationship Quality: An Exploration of Partner Attunement

Authors: Jodie Bradnam, Mark Edwards, Bruce Watt

Abstract:

Mobile technologies have emerged as tools to create and sustain social and romantic relationships. The integration of technologies in close relationships has been of particular research interest, with findings supporting the positive role of mobile phones in nurturing feelings of closeness and connection. More recently, the use of text messaging to manage conflict has become a focus of research attention. Four hundred and eleven adults in committed romantic relationships completed a series of questionnaires measuring attachment orientation, relationship quality, texting frequencies, attitudes, and response expectations. Attachment orientation, relationship length, and texting for connection and disconnection were significant predictors of relationship quality, specifically relationship intimacy. Text frequency varied as a function of attachment orientation, with high attachment anxiety associated with high texting frequencies and low relationship quality. Sending text messages of love and support was related to higher intimacy and relationship satisfaction scores, while sending critical or impersonal texts was associated with significantly lower intimacy and relationship satisfaction scores. The use of texting to manage relational conflict was a stronger negative predictor of relationship satisfaction than the use of texting to express love and affection. Consistent with research on face-to-face communication in couples, the expression of negative sentiments via text was related to lower relationship quality, and these negative sentiments had a stronger and more enduring impact on relationship quality than did the expression of positive sentiments. Attachment orientation, relationship length, and relationship status emerged as variables of interest in understanding the use of mobile technologies in romantic relationships.

Keywords: attachment, destructive conflict, intimacy, mobile communication, relationship quality, relationship satisfaction, texting

Procedia PDF Downloads 375
2944 An Analysis of a Canadian Personalized Learning Curriculum

Authors: Ruthanne Tobin

Abstract:

The shift to a personalized learning (PL) curriculum in Canada represents an innovative approach to teaching and learning that is also evident in various initiatives across the 32-nation OECD. The premise behind PL is that empowering individual learners to have more input into how they access and construct knowledge, and express their understanding of it, will result in more meaningful school experiences and academic success. In this paper presentation, the author reports on a document analysis of the new curriculum in the province of British Columbia. Three theoretical frameworks are used to analyze the new curriculum. Framework 1 focuses on five dominant aspects (FDA) of PL at the classroom level. Framework 2 focuses on conceptualizing and enacting personalized learning (CEPL) within three spheres of influence. Framework 3 focuses on the integration of three types of knowledge (content, technological, and pedagogical). Analysis is ongoing, but preliminary findings suggest that the new curriculum addresses Framework 1 quite well, which identifies five areas of personalized learning: 1) assessment for learning; 2) effective teaching and learning; 3) curriculum entitlement (choice); 4) school organization; and 5) “beyond the classroom walls” (learning in the community). Framework 2 appears to be less well developed in the new curriculum. This framework speaks to the dynamics of PL within three spheres of interaction: 1) nested agency, comprised of overarching constraints [and enablers] from policy makers, school administrators and community; 2) relational agency, which refers to a capacity for professionals to develop a network of expertise to serve shared goals; and 3) students’ personalized learning experience, which integrates differentiation with self-regulation strategies.
Framework 3 appears to be well executed in the new PL curriculum, as it employs the theoretical model of technological, pedagogical content knowledge (TPACK) in which there are three interdependent bodies of knowledge. Notable within this framework is the emphasis on the pairing of technologies with excellent pedagogies to significantly assist students and teachers. This work will be of high relevance to educators interested in innovative school reform.

Keywords: curriculum reform, K-12 school change, innovations in education, personalized learning

Procedia PDF Downloads 271
2943 Profiling Risky Code Using Machine Learning

Authors: Zunaira Zaman, David Bohannon

Abstract:

This study explores the application of machine learning (ML) for detecting security vulnerabilities in source code. The research aims to assist organizations with large application portfolios and limited security testing capabilities in prioritizing security activities. ML-based approaches offer benefits such as confidence scores for findings, tuning of false positives and negatives, and automated feedback. The initial approach, which used natural language processing techniques to extract features, achieved 86% accuracy during the training phase but suffered from overfitting and performed poorly on unseen datasets during testing. To address these issues, the study proposes using the abstract syntax tree (AST) for Java and C++ codebases to capture code semantics and structure and to generate path-context representations for each function. The Code2Vec model architecture is used to learn distributed representations of source code snippets, which in turn train a machine-learning classifier for vulnerability prediction. The study evaluates the performance of the proposed methodology on two datasets and compares the results with existing approaches. The Devign dataset yielded 60% accuracy in predicting vulnerable code snippets and helped resist overfitting, while the Juliet Test Suite was used to predict specific vulnerability classes such as OS command injection, cryptographic, and cross-site scripting vulnerabilities. The Code2Vec model achieved 75% accuracy and a 98% recall rate in predicting OS command injection vulnerabilities. The study concludes that even partial AST representations of source code can be useful for vulnerability prediction, and that the approach has the potential for automated intelligent analysis of source code, including vulnerability prediction on unseen code. State-of-the-art models using natural language processing techniques and CNN models with ensemble modelling techniques did not generalize well on unseen data and faced overfitting issues.
However, predicting vulnerabilities in source code with machine learning poses challenges such as the high dimensionality and complexity of source code, imbalanced datasets, and the identification of specific vulnerability types. Future work will address these challenges and expand the scope of the research.
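The path-context idea behind Code2Vec can be illustrated in miniature. The following sketch is our own illustration, not the authors' pipeline: it uses Python's standard `ast` module rather than a Java or C++ parser, and emits (token, AST-path, token) triples for pairs of leaf tokens, which is the raw input a Code2Vec-style model embeds.

```python
import ast
import itertools

def leaf_tokens(tree):
    """Collect (node, token) pairs for simple leaves: names and constants."""
    leaves = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            leaves.append((node, node.id))
        elif isinstance(node, ast.Constant):
            leaves.append((node, repr(node.value)))
    return leaves

def path_to_root(node, parents):
    """Chain of nodes from a leaf up to the module root."""
    path = [node]
    while path[-1] in parents:
        path.append(parents[path[-1]])
    return path

def path_contexts(source):
    """Return (token, path, token) triples roughly in the Code2Vec style."""
    tree = ast.parse(source)
    parents = {}
    for node in ast.walk(tree):
        for child in ast.iter_child_nodes(node):
            parents[child] = node
    contexts = []
    for (a, tok_a), (b, tok_b) in itertools.combinations(leaf_tokens(tree), 2):
        pa, pb = path_to_root(a, parents), path_to_root(b, parents)
        common = next(n for n in pa if n in pb)  # lowest common ancestor
        up = [type(n).__name__ for n in pa[1:pa.index(common) + 1]]
        down = [type(n).__name__ for n in reversed(pb[1:pb.index(common)])]
        contexts.append((tok_a, "^".join(up) + "_" + "v".join(down), tok_b))
    return contexts
```

For the snippet `x = y + 1`, this yields contexts such as `("x", "Assign_BinOp", "y")`; a real pipeline would hash the paths, embed tokens and paths, and pool the triples into one code vector via attention.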

Keywords: code embeddings, neural networks, natural language processing, OS command injection, software security, code properties

Procedia PDF Downloads 99
2942 Extracting the Coupled Dynamics in Thin-Walled Beams from Numerical Data Bases

Authors: Mohammad A. Bani-Khaled

Abstract:

In this work, we use the discrete Proper Orthogonal Decomposition (POD) transform to characterize the properties of coupled dynamics in thin-walled beams by exploiting numerical databases obtained from finite element simulations. The outcomes of this work will improve our understanding of the linear and nonlinear coupled behavior of thin-walled beam structures. Thin-walled beams are widely used in modern engineering applications, in both large-scale structures (aeronautical structures) and nano-structures (nano-tubes). Therefore, detailed knowledge of the properties of coupled vibrations and buckling in these structures is of great interest to the research community. Due to the geometric complexity of the overall structure, and of the cross-sections in particular, computational mechanics is needed to simulate the dynamics numerically. With numerical computational techniques, it is not necessary to oversimplify a model in order to solve the equations of motion. Computational dynamics methods produce databases of controlled resolution in time and space, and these databases contain information on the properties of the coupled dynamics. Extracting the system's dynamic properties and the strength of coupling among the various fields of motion requires suitable processing techniques. The Proper Orthogonal Decomposition transform is a powerful tool for processing such databases and will be used to study the coupled dynamics of basic thin-walled structures. These structures are ideal as a basis for a systematic study of coupled dynamics in structures of complex geometry.
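As a concrete illustration of the processing step, the POD modes of a snapshot database can be computed from a thin singular value decomposition. The sketch below is our own minimal example in Python/NumPy, not the authors' implementation; it assumes the database is stacked into a matrix with one spatial field per column.

```python
import numpy as np

def pod_modes(snapshots, n_modes):
    """POD of a (space x time) snapshot matrix via thin SVD.

    Returns the leading spatial modes and the fraction of the total
    fluctuation energy captured by each of them.
    """
    fluct = snapshots - snapshots.mean(axis=1, keepdims=True)  # remove mean field
    U, s, _ = np.linalg.svd(fluct, full_matrices=False)        # thin SVD
    energy = s**2 / np.sum(s**2)                               # modal energy fractions
    return U[:, :n_modes], energy[:n_modes]
```

For a separable field such as sin(πx)·cos(t), the first mode captures essentially all of the energy; for a coupled thin-walled beam, the spread of energy across modes is one way to quantify the strength of coupling between the fields of motion.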

Keywords: coupled dynamics, geometric complexity, proper orthogonal decomposition (POD), thin walled beams

Procedia PDF Downloads 412
2941 The Psychology of Virtual Relationships Provides Solutions to the Challenges of Online Learning: A Pragmatic Review and Case Study from the University of Birmingham, UK

Authors: Catherine Mangan, Beth Anderson

Abstract:

There has been a significant drive to use online or hybrid learning in Higher Education (HE) over recent years. HE institutions with a virtual presence offer their communities a range of benefits, including the potential for greater inclusivity, diversity, and collaboration; more flexible learning packages; and more engaging, dynamic content. Institutions can also experience significant challenges when seeking to extend learning spaces in this way, as can learners themselves. For example, staff members’ and learners’ digital literacy varies (as do their perceptions of the technologies in use), and there can be confusion about optimal approaches to implementation. Furthermore, the speed with which HE institutions have needed to shift to fully online or hybrid models, owing to the COVID-19 pandemic, has highlighted the significant barriers to successful implementation. HE environments have been shown to predict a range of organisational, academic, and experiential outcomes, both positive and negative. Much research has focused on the social aspect of virtual platforms, as well as the nature and effectiveness of the technologies themselves. There remains, however, a relative paucity of synthesised knowledge on the psychology of learners’ relationships with their institutions; specifically, how individual differences and interpersonal factors predict students’ ability and willingness to engage with novel virtual learning spaces. Accordingly, extending learning spaces remains challenging for institutions, and wholly remote courses, in particular, can experience high attrition rates. Focusing on the last five years, this pragmatic review summarises evidence from the psychological and pedagogical literature. In particular, the review highlights the importance of addressing the psychological and relational complexities of students’ shift from offline to online engagement. In doing so, it identifies considerations for HE institutions looking to deliver in this way.

Keywords: higher education, individual differences, interpersonal relationships, online learning, virtual environment

Procedia PDF Downloads 169
2940 Connected Female Sufi Disciples: The Workings of Social Online Communities in a Transnational Sufi Order

Authors: Sarah Hebbouch

Abstract:

Two decades ago, research on diasporic women’s participation within Sufi circles would have been inconceivable, not only because of a general lack of recognition of their contribution to Sufism but also because of the intimacy of the rituals, which often take place in confined spaces such as zawiyas (Sufi lodges). Recent scholarly attention to female spiritual experience owes much to a growing digital awareness and an interest in exploring how diasporic communities reproduce those experiences. In a context where female disciples of a Sufi convent are physically separated from the saint’s sanctuary because of immigration from the homeland to the host country, technology becomes a social hub that sustains the Sufis’ ritual commitment and preserves cultural capital in the diaspora. This paper elucidates how female Sufi immigrants affiliated with the Boudchichi brotherhood (based in Morocco) maintain a ‘relational network’ and strong social online relationships with their female compatriots in Morocco through the use of online platforms. Sufi communities living in the diaspora find the internet an open, interactive space that serves to sustain their spiritual participation at a distance and to corroborate their transnational belonging. The paper explores the implications of the use of a digital platform named “Tariqa Info,” the convent’s digital online platform, and how it mediates everyday ritual performance, the promotion of digital connection, and the communication of ideas and discourses. The platform bolsters emotional bonds for transnational female disciples and inclusion within online communities in the homeland. Drawing on an ethnographic lens, this paper discusses the findings of participatory field observation of Sufi women’s online communities, informed by the need to trace the many ostensible aspects of interconnectedness and divergence.

Keywords: digital connection, Sufi convent, social online relationship, transnational female disciples

Procedia PDF Downloads 79
2939 Automatic Classification of Lung Diseases from CT Images

Authors: Abobaker Mohammed Qasem Farhan, Shangming Yang, Mohammed Al-Nehari

Abstract:

Pneumonia is a lung disease that creates congestion in the chest; in severe cases, such congestion can be fatal. Pneumonic lung disease is caused by viral pneumonia, bacterial pneumonia, or COVID-19-induced pneumonia. Early prediction and classification of such lung diseases help to reduce the mortality rate. In this paper, we propose an automatic Computer-Aided Diagnosis (CAD) system based on a deep learning approach. The proposed CAD system takes raw computerized tomography (CT) scans of the patient's chest as input and automatically predicts the disease class. We designed a Hybrid Deep Learning Algorithm (HDLA) to improve accuracy and reduce processing requirements. The raw CT scans are pre-processed first to enhance their quality for further analysis. We then apply a hybrid model that consists of automatic feature extraction followed by classification. We propose a robust 2D Convolutional Neural Network (CNN) model to extract features from each pre-processed CT image, yielding an effective 1D feature vector per input image. The output of the 2D CNN model is then normalized using the Min-Max technique. The second step of the proposed hybrid model concerns training and classification using different classifiers. Simulation results on a publicly available dataset demonstrate the robustness and efficiency of the proposed model compared to state-of-the-art algorithms.
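The Min-Max step applied to the CNN feature vectors is a standard rescaling; a minimal sketch in Python/NumPy (our own illustration, not the authors' code) is:

```python
import numpy as np

def min_max_normalize(features, eps=1e-8):
    """Rescale each feature column of an (images x features) matrix to [0, 1].

    The small eps guards against division by zero for constant columns.
    """
    lo = features.min(axis=0)
    hi = features.max(axis=0)
    return (features - lo) / (hi - lo + eps)
```

Each row here stands for the 1D feature vector the 2D CNN produces for one CT image; the normalized matrix is then handed to the downstream classifiers.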

Keywords: CT scan, Covid-19, deep learning, image processing, lung disease classification

Procedia PDF Downloads 144
2938 MXene-Based Self-Sensing of Damage in Fiber Composites

Authors: Latha Nataraj, Todd Henry, Micheal Wallock, Asha Hall, Christine Hatter, Babak Anasori, Yury Gogotsi

Abstract:

Multifunctional composites with enhanced strength and toughness for superior damage tolerance are essential for advanced aerospace and military applications. Detection of structural changes prior to visible damage may be achieved by incorporating fillers with tunable properties, such as two-dimensional (2D) nanomaterials with high aspect ratios and abundant surface-active sites. While 2D graphene, with its large surface area, good mechanical properties, and high electrical conductivity, seems ideal as a filler, its single-atom thickness can lead to bending and rolling during processing, requiring post-processing to bond it to polymer matrices. Lately, an emerging family of 2D transition metal carbides and nitrides, MXenes, has attracted much attention since their discovery in 2011. Metallic electronic conductivity and good mechanical properties, even at increased polymer content, coupled with hydrophilicity, make MXenes a good candidate filler material for polymer composites and exceptional multifunctional damage indicators in composites. Here, we systematically study MXene (Ti₃C₂) coatings on glass fibers for self-sensing in fiber-reinforced polymer composites using microscopy and micromechanical testing. Further testing is in progress, investigating local variations in the optical, acoustic, and thermal properties within the damage sites in response to strain caused by mechanical loading.

Keywords: damage sensing, fiber composites, MXene, self-sensing

Procedia PDF Downloads 115
2937 Mobile Augmented Reality for Collaboration in Operation

Authors: Chong-Yang Qiao

Abstract:

Mobile augmented reality (MAR) tracks targets in the surroundings and aids operators by visualizing interactive data and procedures, making equipment and systems easier to understand. Operators remotely communicate and coordinate with each other during continuous tasks, exchanging information and data between the control room and the work-site. In routine work, distributed control system (DCS) monitoring and work-site manipulation require operators to interact in real time. The critical question is how to improve the user experience of cooperative work by applying augmented reality in a traditional industrial field. The purpose of this exploratory study is to find a cognitive model for multiple-task performance with MAR. In particular, the focus is on comparing how different tasks and environmental factors influence information processing. Three experiments used interface and interaction designs whose content covered start-up, maintenance, and shutdown procedures embedded in a mobile application. With time demands and human error as evaluation criteria, and through analysis of the mental processes and behavioral actions during the multiple tasks, heuristic evaluation was used to assess operator performance under different situational factors and to record information processing during recognition, interpretation, judgment, and reasoning. The research will identify the functional properties of MAR and constrain the development of the cognitive model. It can be concluded that MAR is easy to use and useful for operators in remote collaborative work.

Keywords: mobile augmented reality, remote collaboration, user experience, cognition model

Procedia PDF Downloads 191
2936 Automatic Segmentation of 3D Tomographic Images Contours at Radiotherapy Planning in Low Cost Solution

Authors: D. F. Carvalho, A. O. Uscamayta, J. C. Guerrero, H. F. Oliveira, P. M. Azevedo-Marques

Abstract:

The creation of vector contour slices (ROIs) on body silhouettes of oncologic patients is an important step during radiotherapy planning in clinics and hospitals to ensure the accuracy of oncologic treatment. Radiotherapy planning is performed by complex software focused on the analysis of tumor regions, the protection of organs at risk (OARs), and the calculation of radiation doses for anomalies (tumors). This software is supplied by a few manufacturers and runs on sophisticated workstations with vector processing, at a cost of approximately twenty thousand dollars. The Brazilian project SIPRAD (Radiotherapy Planning System) presents a proposal adapted to the reality of emerging countries, which generally lack the monetary conditions to acquire radiotherapy planning workstations, resulting in waiting queues for the treatment of new patients. The SIPRAD project is composed of a set of integrated, interoperable software tools able to execute all stages of radiotherapy planning on simple personal computers (PCs) in place of the workstations. The goal of this work is to present a computationally feasible image processing technique that performs automatic contour delineation of patient body silhouettes (SIPRAD-Body). The SIPRAD-Body technique operates on grayscale tomography slices, extended to three dimensions with a greedy algorithm. SIPRAD-Body creates an irregular polyhedron with an adapted Canny edge algorithm, without the use of preprocessing filters such as contrast and brightness adjustment. Compared with existing solutions, the technique reaches a contour similarity of at least 78%, assessed by four criteria: contour area, contour length, the distance between the centers of mass, and the Jaccard index. SIPRAD-Body was tested on a set of oncologic exams provided by the Clinical Hospital of the University of Sao Paulo (HCRP-USP).
The exams covered patients of different ethnicities, ages, tumor severities, and body regions. Even for services that already have workstations, it is possible to run SIPRAD alongside them on PCs, because communication between the two systems is interoperable through the DICOM protocol, which increases workflow. Therefore, the conclusion is that the SIPRAD-Body technique is feasible both for new radiotherapy planning services and for existing ones.
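Of the four comparison criteria, the Jaccard index is the simplest to make concrete. A minimal sketch (our own illustration in Python/NumPy, not the SIPRAD code) for two binary contour masks is:

```python
import numpy as np

def jaccard_index(mask_a, mask_b):
    """Jaccard index |A ∩ B| / |A ∪ B| between two binary segmentation masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # two empty masks are identical by convention
    return np.logical_and(a, b).sum() / union
```

A value of 1.0 means the automatic contour exactly matches the reference delineation; the 78% similarity figure reported above aggregates this criterion with the area, length, and center-of-mass comparisons.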

Keywords: radiotherapy, image processing, DICOM RT, Treatment Planning System (TPS)

Procedia PDF Downloads 290
2935 A 3D Bioprinting System for Engineering Cell-Embedded Hydrogels by Digital Light Processing

Authors: Jimmy Jiun-Ming Su, Yuan-Min Lin

Abstract:

Bioprinting has been applied to produce 3D cellular constructs for tissue engineering. Microextrusion printing is the most commonly used method; however, printing low-viscosity bioink is a challenge for it. Herein, we developed a new 3D printing system to fabricate cell-laden hydrogels via a DLP-based projector. The bioprinter is assembled from affordable equipment, including a stepper motor, a screw, an LED-based DLP projector, and open-source computer hardware and software. The system can use low-viscosity, photo-polymerizable bioink to fabricate 3D tissue mimics in a layer-by-layer manner. In this study, we used gelatin methacrylate (GelMA) as the bioink for stem cell encapsulation. In order to reinforce the printed construct, surface-modified hydroxyapatite was added to the bioink. We demonstrated that silanization of hydroxyapatite could improve the crosslinking at the interface between hydroxyapatite and GelMA. The results showed that incorporating silanized hydroxyapatite into the bioink enhanced the mechanical properties of the printed hydrogel; in addition, the hydrogel had low cytotoxicity and promoted the differentiation of embedded human bone marrow stem cells (hBMSCs) and retinal pigment epithelium (RPE) cells. Moreover, this bioprinting system can generate microchannels inside the engineered tissues to facilitate the diffusion of nutrients. We believe this 3D bioprinting system has the potential to fabricate various tissues for clinical applications and regenerative medicine in the future.

Keywords: bioprinting, cell encapsulation, digital light processing, GelMA hydrogel

Procedia PDF Downloads 168
2934 Dairy Products on the Algerian Market: Proportion of Imitation and Degree of Processing

Authors: Bentayeb-Ait Lounis Saïda, Cheref Zahia, Cherifi Thizi, Ri Kahina Bahmed, Kahina Hallali Yasmine Abdellaoui, Kenza Adli

Abstract:

Algeria is the leading consumer of dairy products in North Africa; however, the nutritional quality of these products remains unknown. The aim of this study is to characterise the dairy products available on the Algerian market in order to assess whether they constitute a healthy and safe choice. To do this, we collected labelling data on 390 dairy products, including cheese, yoghurt, UHT milk and milk drinks, infant formula, and dairy creams. We assessed their degree of processing according to the NOVA classification, as well as the proportion of imitation products. The study was carried out between March 2020 and August 2023. The results show that 88% are ultra-processed: 84% for cheese, 92% for dairy creams, 92% for yoghurt, 100% for infant formula, 92% for margarines, and 36% for UHT milk/dairy drinks. As for imitation/analogue dairy products, the study revealed the following proportions: 100% for infant formula, 78% for butter/margarine, 18% for UHT milk/milk-based drinks, 54% for cheese, 2% for camembert, and 75% for dairy cream. The harmful effects of consuming ultra-processed products on long-term health are increasingly documented in dozens of publications. The findings of this study sound the alarm about the health risks to which Algerian consumers are exposed. Various scientific, economic, and industrial bodies need to be involved in order to safeguard consumer health in both the short and long term, and food awareness and education campaigns should be organised.
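The reported percentages are category-wise tallies over the labelled products; the bookkeeping can be sketched as follows (an illustration of the calculation only, with made-up records rather than the study's data or code):

```python
from collections import Counter

def ultra_processed_share(records):
    """Share of NOVA group 4 (ultra-processed) items per product category.

    records is an iterable of (category, nova_group) pairs, one per product.
    """
    totals, upf = Counter(), Counter()
    for category, nova_group in records:
        totals[category] += 1
        if nova_group == 4:  # NOVA group 4 = ultra-processed food
            upf[category] += 1
    return {c: upf[c] / totals[c] for c in totals}
```

The same tally with an "imitation" flag in place of the NOVA group yields the imitation/analogue proportions.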

Keywords: dairy, UPF, NOVA, yoghurt, cheese

Procedia PDF Downloads 19
2933 Agile Smartphone Porting and App Integration of Signal Processing Algorithms Obtained through Rapid Development

Authors: Marvin Chibuzo Offiah, Susanne Rosenthal, Markus Borschbach

Abstract:

Research projects in computer science often build on existing signal processing algorithms and develop improvements to them. Research budgets are usually limited, so there is little time for implementing the algorithms from scratch. It is therefore common practice to use implementations provided by other researchers as a template. These are most commonly provided in a rapid development, i.e. 4th generation, programming language, usually Matlab. Rapid development is a common method in computer science research for quickly implementing and testing newly developed algorithms, and is also a common task within agile project organization. The growing relevance of mobile devices in the computer market also gives rise to the need to demonstrate that these algorithms can execute, and to measure their performance, on a mobile operating system and processor, particularly on a smartphone. Open mobile systems such as Android are most suitable for this task, which should be performed as efficiently as possible. Furthermore, in cases where the project’s goal statement includes such a task, it is necessary to efficiently implement an interaction between the algorithm and a graphical user interface (GUI) that runs exclusively on the mobile device. This paper examines different proposed solutions for porting computer algorithms obtained through rapid development into a GUI-based Android smartphone app and evaluates their feasibility. Accordingly, the feasible methods are tested, and a short success report is given for each tested method.

Keywords: SMARTNAVI, smartphone app, programming languages, rapid development, MATLAB, Octave, C/C++, Java, Android, NDK, SDK, Linux, Ubuntu, emulation, GUI

Procedia PDF Downloads 475
2932 Mother as Troubles Teller: A Discourse Analytic Case Study of Mother-Adolescent Daughter Interaction

Authors: Domenica L. DelPrete

Abstract:

Viewed as a type of rapport-talk, troubles telling is a common conversational practice among female friends who wish to establish connection, show empathy, or share a disconcerting experience. This study shows how troubles talk between a mother and her adolescent daughter has a different interactional outcome. Specifically, it reveals how discursive interaction with an adolescent daughter becomes increasingly volatile when the mother steps out of the role of nurturer and into the role of troubles teller. Naturally occurring interactions between a mother and her 15-year-old daughter were videotaped in their family home over a two-week period. The data were primarily analyzed from an interactional sociolinguistic perspective, using conversation analytic techniques for transcription and discursive analysis. The following questions guided this research: (1) How is troubles telling discursively accomplished in the everyday talk of a mother and her adolescent daughter? and (2) What topics prompt the mother to engage in troubles talk? The data show that the mother engages her daughter in troubles talk on issues related to body image and physical appearance, and does so by (1) repeated questioning, (2) not accepting the daughter’s responses as adequate, and (3) proffering self-deprecation. Findings reveal that engaging an adolescent daughter in a conversational practice reserved for female friendship groups creates a negative connection and relational disharmony. Since 'telling one’s troubles' assumes an egalitarian relationship between individuals, the mother’s troubles telling creates a peer-like interaction that the adolescent daughter repeatedly resists. This study also proposes a discursive consciousness raising, which aims to enhance communication between mothers and daughters by revealing the signals that show an adolescent daughter’s unwillingness to participate in troubles talk.
Being attuned to these cues may prompt mothers to hesitate before pursuing a topic that will not garner the positive interactional outcome they seek.

Keywords: discursive interaction, maternal roles, mother-daughter interaction, troubles telling

Procedia PDF Downloads 122
2931 An Analytical Systematic Design Approach to Evaluate Ballistic Performance of Armour Grade AA7075 Aluminium Alloy Using Friction Stir Processing

Authors: Lahari Ramya P., Sudhakar I., Madhu V., Madhusudhan Reddy G., Srinivasa Rao E.

Abstract:

The selection of suitable armour materials for defence applications is crucial for increasing the mobility of the systems while maintaining safety. Therefore, armour design studies must determine the material with the lowest possible areal density that successfully resists the predefined threat. A number of light metals and alloys have come to the forefront, especially as substitutes for armour-grade steels. AA5083 aluminium alloy, which meets the military standards imposed by the US Army, is the foremost nonferrous alloy considered as a possible replacement for steel to increase the mobility of armoured vehicles and enhance fuel economy. The growing need for AA5083 aluminium alloy paves the way to develop supplementary aluminium alloys that maintain the military standards. It has been observed that AA2xxx, AA6xxx, and AA7xxx aluminium alloys are potential materials to supplement AA5083. Among these series, the heat-treatable AA7xxx aluminium alloys possess high strength and can compete with armour-grade steels. Earlier investigations revealed that layering AA7xxx aluminium alloy can prevent spalling of the rear portion of armour during ballistic impacts. Hence, the present investigation deals with the fabrication of a hard boron carbide layer on AA7075 aluminium alloy using friction stir processing, with the intention of blunting the projectile at the initial impact while the tough backing (AA7xxx aluminium alloy) dissipates the residual kinetic energy. An analytical approach has been adopted to unfold the ballistic performance against the projectile. Penetration of the projectile inside the armour has been resolved by strain energy model analysis, and the perforation shearing area, i.e., the interface between projectile and armour, is taken into account when evaluating penetration inside the armour.
The fabricated surface composites (targets) were tested as per the military standard (JIS.0108.01) in a ballistic testing tunnel at the Defence Metallurgical Research Laboratory (DMRL), Hyderabad, under standardized testing conditions. The analytical results were well validated against the experimentally obtained ones.

Keywords: AA7075 aluminium alloy, friction stir processing, boron carbide, ballistic performance, target

Procedia PDF Downloads 320
2930 Sustainable Radiation Curable Palm Oil-Based Products for Advanced Materials Applications

Authors: R. Tajau, R. Rohani, M. S. Alias, N. H. Mudri, K. A. Abdul Halim, M. H. Harun, N. Mat Isa, R. Che Ismail, S. Muhammad Faisal, M. Talib, M. R. Mohamed Zin

Abstract:

Bio-based polymeric materials are increasingly used for a variety of applications, including surface coating, drug delivery systems, and tissue engineering. These polymeric materials are ideal for the aforementioned applications because they are derived from natural resources; are non-toxic, low-cost, biocompatible, and biodegradable; and have promising thermal and mechanical properties. The nature of their hydrocarbon chains, carbon double bonds, and ester bonds allows various edible oils, such as soy, sunflower, olive, and palm oil, to have their particular structures fine-tuned in the development of innovative materials. Palm oil can be the most eminent raw material for manufacturing new and advanced natural polymeric materials involving radiation techniques, such as coating resins, nanoparticles, scaffolds, nanotubes, nanocomposites, and lithography, for different branches of industry in countries where oil palm is abundant. The radiation technique is among the most versatile, cost-effective, simple, and effective methods. Its mechanisms include crosslinking, reversible addition-fragmentation chain transfer (RAFT) polymerisation, grafting, and degradation, which rely on exposure to gamma, electron beam (EB), UV, or laser irradiation, all commonly used in the development of polymeric materials. Therefore, this review focuses on current radiation processing technologies for the development of various radiation-curable bio-based polymeric materials with a promising future in biomedical and industrial applications. The key focus of this review is on radiation-curable palm oil-based products, which have featured frequently in recent studies.

Keywords: palm oil, radiation processing, surface coatings, VOC

Procedia PDF Downloads 177
2929 Optimization of Extraction Conditions and Characteristics of Scale Collagen from Sardine: Sardina pilchardus

Authors: F. Bellali, M. Kharroubi, M. Loutfi, N. Bourhim

Abstract:

In Morocco, the fish processing industry is an important source of income and generates a large amount of byproducts, including skins, bones, heads, guts, and scales. These underutilized resources, particularly scales, contain a large amount of protein and calcium. Scales from Sardina pilchardus resulting from the transformation operation have the potential to be used as raw material for collagen production. Taking into account this strong expectation of the regional fish industry, the upgrading of sardine scales is well justified. In addition, political and societal demands for sustainability and environment-friendly industrial production systems, coupled with the depletion of fish resources, drive this trend forward. Fish scales used as a source of collagen therefore have a wide range of applications in the food, cosmetic, and biomedical industries. The main aim of this study is to isolate and characterize acid-solubilized collagen from the scales of the sardine Sardina pilchardus. An experimental design methodology was adopted to optimize the collagen extraction process. The first stage of this work investigates the optimal conditions for sardine scale deproteinization using response surface methodology (RSM). The second part focuses on demineralization with HCl solution or EDTA, and the last establishes the optimum conditions for the isolation of collagen from fish scales by solvent extraction. The basic principle of RSM is to determine model equations that describe the interrelations between the independent and dependent variables.
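The model equations RSM determines are typically second-order polynomials in the process variables, fitted by least squares to the measured responses. A minimal sketch (our own illustration in Python/NumPy, with generic coded factors rather than the study's actual design) is:

```python
import numpy as np

def fit_quadratic_rsm(X, y):
    """Fit a second-order response surface y ~ b0 + sum(bi*xi) + sum(bij*xi*xj).

    X is an (n_runs x n_factors) design matrix of coded factor levels; returns
    the expanded model matrix and the least-squares coefficients.
    """
    n, k = X.shape
    cols = [np.ones(n)]                                   # intercept
    cols += [X[:, i] for i in range(k)]                   # linear terms
    cols += [X[:, i] * X[:, j]                            # square and interaction terms
             for i in range(k) for j in range(i, k)]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A, coeffs
```

The fitted surface is then interrogated for the factor settings that maximise the response, here the deproteinization yield.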

Keywords: Sardina pilchardus, scales, valorization, collagen extraction, response surface methodology

Procedia PDF Downloads 408
2928 Microbial Dynamics and Sensory Traits of Spanish- and Greek-Style Table Olives (Olea europaea L. cv. Ascolana tenera) Fermented with Sea Fennel (Crithmum maritimum L.)

Authors: Antonietta Maoloni, Federica Cardinali, Vesna Milanović, Andrea Osimani, Ilario Ferrocino, Maria Rita Corvaglia, Luca Cocolin, Lucia Aquilanti

Abstract:

Table olives (Olea europaea L.) are among the most important fermented vegetables all over the world, while sea fennel (Crithmum maritimum L.) is an emerging food crop with interesting nutritional and sensory traits. Both are characterized by the presence of several bioactive compounds with potential beneficial health effects, thus representing two valuable substrates for the manufacture of innovative vegetable-based preserves. Given these premises, the present study aimed to explore the co-fermentation of table olives and sea fennel to produce new high-value preserves. Spanish-style and Greek-style processing methods and the use of a multiple-strain starter were explored. The preserves were evaluated for their microbial dynamics and key sensory traits. During fermentation, a progressive pH reduction was observed. Mesophilic lactobacilli, mesophilic lactococci, and yeasts were the main microbial groups at the end of the fermentation, whereas Enterobacteriaceae decreased during fermentation. An evolution of the microbiota was revealed by metataxonomic analysis, with Lactiplantibacillus plantarum dominating in the late stage of fermentation irrespective of the processing method and the use of the starter. Greek-style preserves turned out crunchier and less fibrous than Spanish-style ones and were preferred by trained panelists.

Keywords: lactic acid bacteria, Lactiplantibacillus plantarum, metataxonomy, panel test, rock samphire

Procedia PDF Downloads 122